Can PostgreSQL Replace Redis? Performance, Cost, and Migration Insights

This article examines how PostgreSQL can take over Redis' roles—caching, pub/sub, and job queues—by leveraging unlogged tables, LISTEN/NOTIFY, and SKIP LOCKED, presenting cost savings, benchmark results, migration steps, and code examples for a practical backend replacement.


Background

The original architecture used PostgreSQL for persistent storage and Redis for caching, pub/sub, and background job processing. Maintaining two separate systems introduced additional operational overhead and a second point of failure.

Why Replace Redis?

Cost: AWS ElastiCache costs $45/month for 2 GB and $110/month for 5 GB, while a 20 GB PostgreSQL RDS instance costs $50/month, with only about $0.50 extra for additional storage.

Operational complexity: Backups, monitoring, and failover had to be managed for both systems.

Data consistency: With PostgreSQL, cache entries can be invalidated in the same transaction as the underlying write, eliminating the stale-data windows that appear when Redis and the database drift apart.
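The atomic-invalidation point can be sketched in SQL. The accounts table and cache key below are hypothetical, used only to show the write and the invalidation committing (or rolling back) together:

```sql
BEGIN;
-- Update the authoritative row...
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- ...and drop the cached copy in the same transaction.
DELETE FROM cache WHERE key = 'account:1';
COMMIT;  -- both take effect atomically; on ROLLBACK, neither does
```

With Redis as the cache, the DELETE would be a separate network call that can fail or race after the database commit.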

PostgreSQL Features Used as Redis Replacements

1. Unlogged tables for caching

CREATE UNLOGGED TABLE cache (
  key TEXT PRIMARY KEY,
  value JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);
CREATE INDEX idx_cache_expires ON cache(expires_at);

-- Insert / update (upsert) with TTL
INSERT INTO cache (key, value, expires_at)
VALUES ($1, $2, NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value,
      expires_at = EXCLUDED.expires_at;

-- Read if not expired
SELECT value FROM cache WHERE key = $1 AND expires_at > NOW();

-- Periodic cleanup
DELETE FROM cache WHERE expires_at < NOW();

Unlogged tables skip write-ahead logging (WAL), so writes are significantly faster; the trade-off is that their contents are truncated after a crash and are not replicated, which is acceptable for cache data.
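The two queries above combine naturally into a read-through helper. A minimal sketch: cacheGetOrSet is a hypothetical name, and `query` is assumed to be any function with pg's `(text, params) => Promise<{rows}>` signature (such as `pool.query` from a pg.Pool), passed in so it can be faked in tests:

```javascript
// Hypothetical read-through helper over the unlogged cache table above.
// `query` is any (text, params) => Promise<{rows}> function, e.g. pool.query.
async function cacheGetOrSet(query, key, loader, ttl = '1 hour') {
  // Serve from cache if a live (non-expired) entry exists.
  const hit = await query(
    'SELECT value FROM cache WHERE key = $1 AND expires_at > NOW()',
    [key]
  );
  if (hit.rows.length > 0) return hit.rows[0].value;

  // Miss: compute the value, then upsert it with a fresh TTL.
  const value = await loader();
  await query(
    `INSERT INTO cache (key, value, expires_at)
     VALUES ($1, $2, NOW() + $3::interval)
     ON CONFLICT (key) DO UPDATE
       SET value = EXCLUDED.value,
           expires_at = EXCLUDED.expires_at`,
    [key, JSON.stringify(value), ttl]
  );
  return value;
}
```

Because the upsert only runs on a miss, concurrent callers may each invoke the loader once; for a cache that is usually the right trade against extra locking.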

2. LISTEN/NOTIFY for pub/sub

-- Publisher (SQL)
NOTIFY notifications, '{"userId":123,"msg":"Hello"}';

// Subscriber (Node.js)
const { Client } = require('pg');
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();

// The connection must stay open to keep receiving notifications;
// production code should also reconnect and re-LISTEN on errors.
await client.query('LISTEN notifications');
client.on('notification', msg => {
  const payload = JSON.parse(msg.payload);
  console.log(payload);
});

Latency is somewhat higher than Redis pub/sub (2-5 ms vs 1-2 ms), but NOTIFY is transactional: notifications are delivered only if the surrounding transaction commits.
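One practical wrinkle worth noting: the NOTIFY statement itself cannot take bind parameters, but the built-in pg_notify(channel, payload) function can, which is the usual way to publish dynamic payloads from application code. A sketch, with an illustrative size guard (the default payload limit is just under 8000 bytes; MAX_NOTIFY_BYTES, encodePayload, and publish are hypothetical names):

```javascript
// NOTIFY payloads must stay under the server's limit (8000 bytes by default),
// so large messages should carry a row id rather than the full document.
const MAX_NOTIFY_BYTES = 7999;

function encodePayload(obj) {
  const payload = JSON.stringify(obj);
  if (Buffer.byteLength(payload, 'utf8') > MAX_NOTIFY_BYTES) {
    throw new Error('NOTIFY payload too large; publish an id instead');
  }
  return payload;
}

// Hypothetical publisher; `client` is a connected pg Client or Pool.
// pg_notify() accepts bind parameters, unlike the bare NOTIFY statement.
async function publish(client, channel, obj) {
  await client.query('SELECT pg_notify($1, $2)', [channel, encodePayload(obj)]);
}
```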

3. SKIP LOCKED for job queues

WITH next_job AS (
  SELECT id FROM jobs
  WHERE queue = $1
    AND attempts < max_attempts
    AND scheduled_at <= NOW()
  ORDER BY scheduled_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs SET attempts = attempts + 1
FROM next_job
WHERE jobs.id = next_job.id
RETURNING jobs.*;

FOR UPDATE SKIP LOCKED lets many workers dequeue concurrently without blocking on each other's row locks. Measured latency is ~0.3 ms per dequeue, comparable to Redis BRPOP (~0.1 ms).
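A worker built on this query is essentially a poll loop. A hypothetical sketch: DEQUEUE_SQL is the query above, runWorker and nextDelayMs are illustrative names, and the exponential backoff (100 ms doubling up to a 5 s cap while the queue is empty) is an assumption for idle polling, not something the article prescribes:

```javascript
// Dequeue query from the article, claiming one due job per call.
const DEQUEUE_SQL = `
  WITH next_job AS (
    SELECT id FROM jobs
    WHERE queue = $1 AND attempts < max_attempts AND scheduled_at <= NOW()
    ORDER BY scheduled_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  UPDATE jobs SET attempts = attempts + 1
  FROM next_job WHERE jobs.id = next_job.id
  RETURNING jobs.*`;

// Illustrative idle backoff: 100ms, 200ms, 400ms, ... capped at 5s.
function nextDelayMs(emptyPolls, baseMs = 100, capMs = 5000) {
  return Math.min(capMs, baseMs * 2 ** emptyPolls);
}

// `pool` is assumed to be a pg.Pool; `handle` processes one claimed job.
async function runWorker(pool, queue, handle) {
  let emptyPolls = 0;
  for (;;) {
    const { rows } = await pool.query(DEQUEUE_SQL, [queue]);
    if (rows.length === 0) {
      // Queue empty: back off before polling again.
      await new Promise(r => setTimeout(r, nextDelayMs(emptyPolls++)));
      continue;
    }
    emptyPolls = 0;
    await handle(rows[0]);
  }
}
```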

4. JSONB for session storage

CREATE TABLE sessions (
  id TEXT PRIMARY KEY,
  data JSONB NOT NULL,
  expires_at TIMESTAMPTZ NOT NULL
);
CREATE INDEX idx_sessions_expires ON sessions(expires_at);

-- Upsert session
INSERT INTO sessions (id, data, expires_at)
VALUES ($1, $2, NOW() + INTERVAL '24 hours')
ON CONFLICT (id) DO UPDATE
  SET data = EXCLUDED.data,
      expires_at = EXCLUDED.expires_at;

-- Read session
SELECT data FROM sessions WHERE id = $1 AND expires_at > NOW();

-- Query inside JSONB
SELECT * FROM sessions WHERE data->>'userId' = '123';

Session data can be queried and indexed like any other relational data, something Redis cannot do natively without maintaining secondary structures by hand.
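The ->> filter above scans every row unless it is indexed. Two options worth considering (index names here are illustrative): an expression index matching the exact filter, or a GIN index over the whole document for containment queries:

```sql
-- Expression index serving: WHERE data->>'userId' = '123'
CREATE INDEX idx_sessions_user ON sessions ((data->>'userId'));

-- GIN index (jsonb_path_ops) serving @> containment queries, e.g.:
--   SELECT * FROM sessions WHERE data @> '{"userId": 123}';
CREATE INDEX idx_sessions_data ON sessions USING GIN (data jsonb_path_ops);
```

The expression index is smaller and faster for one known field; the GIN index covers ad-hoc containment queries across the whole document.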

Benchmark Results (AWS RDS db.t3.medium)

Cache set: Redis 0.05 ms → PostgreSQL 0.08 ms (+60 %)

Cache get: Redis 0.04 ms → PostgreSQL 0.06 ms (+50 %)

Pub/Sub latency: Redis 1.2 ms → PostgreSQL 3.1 ms (+158 %)

Queue push: Redis 0.08 ms → PostgreSQL 0.15 ms (+87 %)

Queue pop: Redis 0.12 ms → PostgreSQL 0.31 ms (+158 %)

Apart from pub/sub, every PostgreSQL operation stayed under 1 ms, and consolidating onto one database removes the separate network hop to Redis.

Migration Plan

Phase 1 – Coexistence (Week 1)

// Write to both stores
await redis.set(key, value);
await pg.query('INSERT INTO cache ...');

// Read from Redis (primary)
let data = await redis.get(key);

Monitor hit‑rate and latency.

Phase 2 – Read from PostgreSQL first (Week 2)

// Read from PostgreSQL first, fall back to Redis on a miss
const result = await pg.query('SELECT value FROM cache WHERE key = $1 AND expires_at > NOW()', [key]);
let data = result.rows[0]?.value;
if (data === undefined) data = await redis.get(key);

Track error rates and performance.
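One refinement worth considering in this phase: on a PostgreSQL miss, backfill the entry from Redis so the PostgreSQL hit rate converges before the cutover. A hypothetical sketch (readWithBackfill is an illustrative name; `pg` and `redis` are assumed to be the same clients used in the snippets above, and the 1-hour TTL is an assumption):

```javascript
// Phase 2 read path with backfill: PostgreSQL first, Redis as fallback,
// copying fallback hits into PostgreSQL so its hit rate converges.
async function readWithBackfill(pg, redis, key) {
  const result = await pg.query(
    'SELECT value FROM cache WHERE key = $1 AND expires_at > NOW()', [key]);
  if (result.rows.length > 0) return result.rows[0].value;

  const fromRedis = await redis.get(key);
  if (fromRedis != null) {
    // Backfill PostgreSQL with the Redis copy (stored as JSON text).
    await pg.query(
      `INSERT INTO cache (key, value, expires_at)
       VALUES ($1, $2, NOW() + INTERVAL '1 hour')
       ON CONFLICT (key) DO UPDATE
         SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at`,
      [key, fromRedis]);
    return JSON.parse(fromRedis);
  }
  return null;
}
```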

Phase 3 – Write only to PostgreSQL (Week 3)

await pg.query('INSERT INTO cache ...');

Verify full functionality.

Phase 4 – Decommission Redis (Week 4)

# Stop Redis service
# Observe for errors – if none, migration is complete

Optimization Tips

Use a connection pool (e.g., pg.Pool) sized against the server's max_connections and the number of application instances sharing the database.

Create indexes on cache keys, expiration columns, and pending jobs (a partial index such as WHERE attempts < max_attempts keeps the queue scan small).

Tune PostgreSQL parameters: shared_buffers, effective_cache_size, work_mem.

Run regular VACUUM ANALYZE on cache and job tables.
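For the first tip, a pool configuration along these lines is a reasonable starting point (the numbers are illustrative, not recommendations):

```javascript
// Illustrative pg.Pool settings; size `max` against the server's
// max_connections divided by the number of app processes sharing it.
const poolConfig = {
  connectionString: process.env.DATABASE_URL,
  max: 20,                       // upper bound on open connections per process
  idleTimeoutMillis: 30000,      // recycle idle connections after 30s
  connectionTimeoutMillis: 2000  // fail fast if the pool is exhausted
};

// Usage sketch:
// const { Pool } = require('pg');
// const pool = new Pool(poolConfig);
```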

Decision Matrix

PostgreSQL is a good fit when you need transactional consistency, have modest traffic (≤ 100 k ops/sec), want to reduce infrastructure complexity, and can tolerate a 0.1‑1 ms latency increase. Keep Redis for ultra‑high‑throughput workloads, Redis‑specific data structures (sorted sets, HyperLogLog, streams), or when a dedicated cache layer is required for micro‑service architectures.

Conclusion

Replacing Redis with PostgreSQL saved roughly $100/month, cut backup and monitoring complexity by about 50 %, and simplified deployment. The latency penalty is modest (sub-millisecond for cache reads). For small-to-medium applications with simple caching, session storage, and background job needs, PostgreSQL can serve as a single source of truth. High-performance or feature-rich use cases still benefit from a dedicated Redis layer.

Tags: backend, migration, performance, Redis, caching, PostgreSQL
Written by

dbaplus Community

Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.
