Why Postgres Alone Can Replace Redis, RabbitMQ, and Elasticsearch
The article argues that a single PostgreSQL instance can handle caching, queuing, full‑text search, and real‑time notifications, eliminating the need for separate services like Redis, RabbitMQ, and Elasticsearch, while reducing cost and complexity.
In recent months the author wrestled with a temptation common among indie developers and startups: stitching together a complex stack—Redis for caching, RabbitMQ for queues, Elasticsearch for search, and MongoDB for flexible storage. After building UserJot, a feedback and roadmap tool, they questioned whether a single PostgreSQL database could replace all of these components.
1. The Myth of PostgreSQL’s Lack of Scalability
Many believe PostgreSQL is just a relational database that needs auxiliary tools for specific tasks. Real‑world examples—Instagram, Discord, Notion—show that a well‑designed PostgreSQL deployment can support millions of users and billions of messages without external services.
2. Queue System Using Native LISTEN/NOTIFY
A plain table combined with FOR UPDATE SKIP LOCKED can serve as a lightweight job queue, with LISTEN/NOTIFY available to wake idle workers, eliminating the need for Redis or RabbitMQ. Example implementation:
-- Simple job queue in pure Postgres
CREATE TABLE job_queue (
    id SERIAL PRIMARY KEY,
    job_type VARCHAR(50),
    payload JSONB,
    status VARCHAR(20) DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW(),
    processed_at TIMESTAMP
);

-- A worker claims the oldest pending job
BEGIN;
UPDATE job_queue
SET status = 'processing', processed_at = NOW()
WHERE id = (
    SELECT id FROM job_queue
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
) RETURNING *;
COMMIT;

FOR UPDATE SKIP LOCKED lets many workers poll concurrently while guaranteeing that each pending job is claimed by exactly one of them, removing the overhead of a separate broker.
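To avoid busy-polling, the queue table can be paired with LISTEN/NOTIFY so workers sleep until a job arrives. A minimal sketch—the trigger, function, and channel names here are illustrative, not from the article:

```sql
-- Wake idle workers whenever a job is enqueued
CREATE OR REPLACE FUNCTION notify_new_job() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('new_job', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER job_queue_notify
AFTER INSERT ON job_queue
FOR EACH ROW EXECUTE FUNCTION notify_new_job();

-- Each worker subscribes once, then blocks until notified
LISTEN new_job;
```

On receiving a notification, a worker runs the SKIP LOCKED claim query above; a missed notification is harmless, since the next claim attempt still finds any pending rows.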
3. Key‑Value Store with JSONB
PostgreSQL’s JSONB type provides a built‑in key‑value store, avoiding a dedicated Redis instance. Example schema and query:
-- Your Redis alternative
CREATE TABLE kv_store (
    key VARCHAR(255) PRIMARY KEY,
    value JSONB,
    expires_at TIMESTAMP
);
CREATE INDEX idx_kv_value ON kv_store USING GIN (value);

-- Fast containment query on nested JSON
SELECT * FROM kv_store WHERE value @> '{"user_id": 12345}';

The @> containment operator, accelerated by the GIN index, enables fast queries on nested JSON fields while preserving ACID guarantees, with performance competitive with dedicated NoSQL stores.
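Unlike Redis, PostgreSQL does not expire rows automatically, so TTL semantics must be implemented in queries plus a periodic cleanup. A sketch under that assumption (the key name and interval are illustrative):

```sql
-- SET with a 1-hour TTL (upsert)
INSERT INTO kv_store (key, value, expires_at)
VALUES ('session:abc', '{"user_id": 12345}', NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
    SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

-- GET, treating expired keys as missing
SELECT value FROM kv_store
WHERE key = 'session:abc'
  AND (expires_at IS NULL OR expires_at > NOW());

-- Periodic cleanup, e.g. from cron or the pg_cron extension
DELETE FROM kv_store WHERE expires_at <= NOW();
```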
4. Full‑Text Search Inside PostgreSQL
Instead of an Elasticsearch cluster, PostgreSQL’s built‑in full‑text search can be used:
ALTER TABLE posts ADD COLUMN search_vector tsvector;

CREATE OR REPLACE FUNCTION update_search_vector() RETURNS trigger AS $$
BEGIN
    NEW.search_vector := to_tsvector('english',
        COALESCE(NEW.title, '') || ' ' || COALESCE(NEW.content, ''));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger and index the vector
CREATE TRIGGER posts_search_vector_update
BEFORE INSERT OR UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION update_search_vector();

CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);

-- Ranked search
SELECT title, ts_rank(search_vector, query) AS rank
FROM posts, to_tsquery('english', 'startup & postgres') query
WHERE search_vector @@ query
ORDER BY rank DESC;

This provides stemming and relevance ranking without external infrastructure; fuzzy matching can be added with the pg_trgm extension.
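On PostgreSQL 12 and later, the trigger approach above can be replaced with a stored generated column, which keeps the vector in sync declaratively. A sketch, assuming the same posts table before the column is added:

```sql
ALTER TABLE posts
    ADD COLUMN search_vector tsvector
    GENERATED ALWAYS AS (
        to_tsvector('english', COALESCE(title, '') || ' ' || COALESCE(content, ''))
    ) STORED;
```

The explicit 'english' configuration makes to_tsvector immutable, which is what allows it inside a generated column.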
5. Real‑Time Updates via LISTEN/NOTIFY
PostgreSQL can push real‑time notifications to clients:
CREATE OR REPLACE FUNCTION notify_changes() RETURNS trigger AS $$
BEGIN
PERFORM pg_notify('table_changes', json_build_object(
'table', TG_TABLE_NAME,
'action', TG_OP,
'data', row_to_json(NEW)
)::text);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

Applications listening on the table_changes channel can update their UI instantly, removing the need for a Redis pub/sub layer to fan out change events.
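The notify_changes function still has to be attached to a table before it fires. A sketch wiring it to a hypothetical posts table (the trigger name is illustrative):

```sql
-- Fire on inserts and updates; a DELETE trigger would need OLD,
-- since NEW is not assigned for deletes
CREATE TRIGGER posts_notify
AFTER INSERT OR UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION notify_changes();

-- Any session or application connection subscribes with:
LISTEN table_changes;
```

Note that NOTIFY payloads are capped at roughly 8 kB, so for large rows it is common to send only an id and have listeners re-query.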
6. Hidden Costs of a “Specialized” Stack
A typical modern stack can easily run around $125/month across Redis, a message queue, a search service, and monitoring. Beyond hosting, operational overhead includes managing multiple services, handling different failure modes, maintaining separate backups, and addressing distinct security concerns.
7. Scaling a Single PostgreSQL Instance
PostgreSQL can handle millions of transactions, terabytes of data, and thousands of concurrent connections. Large‑scale examples include Airbnb, Robinhood, and GitLab. When horizontal scaling is required, options such as read replicas, partitioning, connection pooling, and logical replication are available.
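Of the scaling options mentioned, declarative partitioning keeps large tables manageable without leaving Postgres. A sketch with a hypothetical events table partitioned by month:

```sql
CREATE TABLE events (
    id BIGSERIAL,
    created_at TIMESTAMPTZ NOT NULL,
    payload JSONB
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE events_2025_02 PARTITION OF events
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
```

Queries that filter on created_at scan only the relevant partitions thanks to partition pruning, and old partitions can be detached or dropped cheaply.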
8. Avoid Over‑Engineering
Many teams design for hypothetical future traffic, adding unnecessary components. The recommended approach is to start with PostgreSQL, monitor real bottlenecks, scale specific components only when needed, and keep the architecture simple.
9. When Dedicated Tools Are Justified
Specialized services become worthwhile only under extreme conditions: >100k jobs per minute, sub‑millisecond cache latency, TB‑scale analytics, millions of concurrent users, or strict global consistency requirements.
10. Practical Steps for the Next Project
Begin with PostgreSQL only; avoid adding other databases prematurely.
Leverage JSONB for schema‑less flexibility while retaining SQL power.
Implement queues directly in PostgreSQL to cut costs and complexity.
Add specialized tools only after genuine bottlenecks are identified.
11. Author’s Real‑World Experience
UserJot was built entirely on a single PostgreSQL instance—handling authentication, feedback submission, full‑text search, real‑time notifications, and a key‑value store. The result was faster feature delivery, fewer components to debug, and minimal infrastructure cost.
12. Conclusion
PostgreSQL’s versatility lets it serve as the primary database, cache, queue, search engine, and real‑time system, preserving ACID transactions across all operations. Over‑reliance on specialized tools can add unnecessary complexity and expense, especially for startups that need to solve real user problems quickly.
dbaplus Community
