Postgres Can Do That: Features You’re Paying Other Tools to Replicate
Your stack probably includes Redis for caching, Elasticsearch for search, a separate job queue, maybe a time-series database. Each adds operational overhead, synchronisation complexity, and another line item on your infrastructure bill.
Meanwhile, Postgres has been quietly shipping features that replicate most of what these tools do. Not “good enough for prototypes” — genuinely production-ready capabilities that teams discover only after they’ve already committed to a multi-database architecture.
Here’s what you’re likely paying for twice.
Full-Text Search That Actually Works
The reflexive reach for Elasticsearch often happens before anyone checks what Postgres can do natively. That’s a mistake worth real money.
Postgres ships with tsvector and tsquery types that support stemming, ranking, phrase matching, and boolean operators. Add a GIN index and searches across millions of documents come back in milliseconds.
CREATE INDEX idx_articles_search ON articles USING GIN (to_tsvector('english', title || ' ' || body));
SELECT title, ts_rank(to_tsvector('english', title || ' ' || body), query) AS rank
FROM articles, plainto_tsquery('english', 'distributed systems') query
WHERE to_tsvector('english', title || ' ' || body) @@ query
ORDER BY rank DESC;
This handles 80% of search use cases: product catalogues, documentation search, content discovery, autocomplete. You get highlighting, fuzzy matching with pg_trgm, and weighted ranking across multiple fields.
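As a minimal sketch of the pg_trgm fuzzy matching mentioned above — assuming the same articles table, with the index name chosen here for illustration:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- trigram index so similarity searches on titles can use an index scan
CREATE INDEX idx_articles_title_trgm ON articles USING GIN (title gin_trgm_ops);

-- find titles similar to a misspelled query; % is the trigram similarity operator
SELECT title, similarity(title, 'distrubuted systms') AS score
FROM articles
WHERE title % 'distrubuted systms'
ORDER BY score DESC
LIMIT 10;
```

The similarity threshold behind `%` defaults to 0.3 and can be tuned with the `pg_trgm.similarity_threshold` setting.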
The trade-off is real: Elasticsearch wins on faceted search, complex aggregations, and truly massive scale. But most teams deploy Elastic for problems Postgres already solves, then spend engineering cycles keeping two systems in sync.
If your search requirements don’t include multi-tenancy across billions of documents or real-time analytics dashboards, benchmark Postgres first. The operational simplicity of one database handling both storage and search compounds over time.
LISTEN/NOTIFY: The Queue You Already Have

Redis queues and dedicated message brokers solve real problems. They also introduce failure modes: what happens when your queue and database disagree about state?
Postgres has a built-in pub/sub system most teams never use:
-- Publisher
NOTIFY order_created, '{"order_id": 12345, "customer": "acme"}';
-- Subscriber (in application code)
LISTEN order_created;
This is transactional. The notification fires only if the transaction commits. No two-phase coordination, no eventual consistency headaches between your database and queue.
For patterns like “notify a service when a row changes” or “trigger a background job after insert,” this eliminates an entire category of infrastructure. Combine it with triggers and you get change data capture without Debezium.
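A sketch of that trigger pattern, assuming a hypothetical orders table — the trigger function publishes each new row on the channel from the earlier example:

```sql
-- fires inside the inserting transaction; the notification is only
-- delivered if that transaction commits
CREATE OR REPLACE FUNCTION notify_order_created() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('order_created', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify
AFTER INSERT ON orders
FOR EACH ROW EXECUTE FUNCTION notify_order_created();
```

Any session that has run LISTEN order_created receives the JSON-serialised row after commit, with no polling.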
The limitations matter: NOTIFY doesn’t persist messages, doesn’t guarantee delivery to offline consumers, and caps payloads at 8000 bytes. It’s not a replacement for Kafka when you need replay, partitioning, or durable event sourcing.
But for “fire webhook when order completes” or “invalidate cache on update”? You’re running a separate queue for something your database does atomically.
JSONB: The Document Store Inside Your Relational Database
The MongoDB conversation usually starts with “we need flexibility” and ends with “we need joins.” Postgres offers both.
JSONB stores schemaless documents with full indexing support. You can query nested fields, create partial indexes on specific keys, and mix document-style flexibility with relational integrity:
CREATE TABLE events (
  id SERIAL PRIMARY KEY,
  event_type TEXT NOT NULL,
  payload JSONB NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);
SELECT * FROM events
WHERE payload @> '{"user_id": 42, "action": "purchase"}';
This isn’t a compromise — JSONB is competitive with dedicated document databases for many access patterns. GIN indexes make containment queries fast. You can JOIN your flexible event log with your normalised user table in a single query.
The pattern that works: strict relational schemas for core domain entities, JSONB columns for metadata, audit trails, and genuinely variable attributes. You get the query flexibility of documents with referential integrity where it matters.
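A sketch of that mix, assuming a hypothetical users table alongside the events table above:

```sql
-- join the schemaless event log against a normalised relational table;
-- ->> extracts a JSONB field as text, so the id needs an explicit cast
SELECT u.email, e.payload->>'action' AS action, e.created_at
FROM events e
JOIN users u ON u.id = (e.payload->>'user_id')::int
WHERE e.payload @> '{"action": "purchase"}'
ORDER BY e.created_at DESC;
```

The containment filter still uses the GIN index from the earlier example; the join itself is ordinary relational Postgres.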
Time-Series and Analytics Without Another Database
TimescaleDB exists because Postgres’s extension architecture makes it possible. But even without extensions, Postgres handles time-series workloads better than many teams realise.
Window functions, BRIN indexes on timestamp columns, and table partitioning combine to make time-series queries fast and storage efficient:
CREATE TABLE metrics (
  time TIMESTAMPTZ NOT NULL,
  device_id INT NOT NULL,
  value DOUBLE PRECISION
) PARTITION BY RANGE (time);
-- a range-partitioned table needs at least one partition before inserts succeed
CREATE TABLE metrics_2025_01 PARTITION OF metrics
  FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE INDEX idx_metrics_time ON metrics USING BRIN (time);
BRIN indexes are tiny — orders of magnitude smaller than B-tree — and perfect for append-only time-series data where rows correlate with physical storage order.
For IoT dashboards, application metrics, or financial data, this setup handles millions of rows with straightforward queries. You lose the automatic compression and continuous aggregates that TimescaleDB provides, but you avoid another dependency.
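A sketch of the kind of query this setup serves well — hourly averages per device, with a window function computing the change between consecutive buckets (valid because windows run after GROUP BY in Postgres):

```sql
SELECT device_id,
       date_trunc('hour', time) AS bucket,
       avg(value) AS hourly_avg,
       avg(value) - lag(avg(value)) OVER (
         PARTITION BY device_id
         ORDER BY date_trunc('hour', time)
       ) AS delta_from_previous_hour
FROM metrics
GROUP BY device_id, date_trunc('hour', time)
ORDER BY device_id, bucket;
```

This is roughly what TimescaleDB’s time_bucket gives you, written in plain SQL.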
The Real Cost Calculation
Every additional database in your stack carries hidden costs:
- Operational overhead: backups, monitoring, upgrades, security patches — multiplied.
- Synchronisation bugs: data that exists in one system but not another, race conditions, eventual consistency surprises.
- Cognitive load: engineers context-switching between query languages, consistency models, and failure modes.
- Latency: network hops between services that could be single-database transactions.
Postgres extensions like PostGIS, pgvector, and pg_cron expand these capabilities further. Vector similarity search for AI applications, geospatial queries, scheduled jobs — all without leaving your primary database.
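A sketch of the pgvector case — the vector type and the `<->` distance operator come from the extension, and the documents table here is hypothetical; real embeddings would have hundreds of dimensions rather than the three used for illustration:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
  id BIGSERIAL PRIMARY KEY,
  content TEXT,
  embedding vector(3)  -- dimension must match your embedding model
);

-- nearest neighbours by Euclidean distance; <-> is pgvector's L2 operator
SELECT id, content
FROM documents
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'::vector
LIMIT 5;
```

Vector search lives in the same transaction and the same backup as the rows it describes — the core argument of this article in one example.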
The contrarian position isn’t “never use specialised tools.” It’s “prove you need them before adopting them.” Most startups would ship faster and operate more reliably with one well-tuned Postgres instance than with a constellation of purpose-built databases.
When to Actually Reach for Something Else
Postgres has limits. Elasticsearch genuinely outperforms it for log analytics at scale. Redis is faster for ephemeral caching with sub-millisecond requirements. Kafka handles event streaming workloads Postgres shouldn’t attempt.
The decision framework: start with Postgres, measure actual performance, and add complexity only when you have data showing you need it. The burden of proof should be on the additional tool, not on Postgres.
If you’re evaluating whether your infrastructure complexity is actually necessary, or just inherited, that’s exactly the kind of architectural review we help with at Koalabs. Sometimes the best system is the one you already have.