
Postgres vs MongoDB (Technical Benchmark and Analysis)

By kesavmurthy Kandeeban · Published April 28, 2026 · 12 min read · Source: Fintech Tag
Tags: Trading · Regulation

A latency benchmark across joins, transactions, hotspot updates, and bulk inserts and what it means for trading platform architecture


(Image: PostgreSQL vs MongoDB)

Introduction

This blog compares PostgreSQL and MongoDB across core trading workloads (bulk and concurrent inserts, aggregations, joins/lookups, hotspot updates, and ACID transactions), executed at 10K, 30K, and 50K operations (three runs each) to validate performance consistency.

Oh boy, not an ideal topic, but here we go!

TL;DR

Six workloads. Three runs each. 10K to 50K operations. PostgreSQL 16 vs MongoDB 7 on identical hardware and datasets, measuring P50 through P99 latency throughout. PostgreSQL won five of six: concurrent inserts by 12x, aggregations by 2.7x, lookup/joins by 121x, hotspot updates by 10.6x, and transactions at equivalent production durability by 6.4x on localhost (wider on a real cluster). MongoDB won bulk insert at 3.9x, with the caveat that it ran one index to PostgreSQL’s three. The methodology, raw numbers, and architectural reasoning are all below. If you want the conclusion first: for transactional financial workloads, PostgreSQL is not a preference; it’s the correct choice.

Benchmark Design and Setup

Each design decision below was deliberate. The goal was to avoid the most common ways benchmarks accidentally favor one database over the other.

Benchmark Results: Deep Dive

Latency tables for PostgreSQL and MongoDB, showing P50/P75/P95/P99 across all runs (values in µs).

Concurrent Insert: All 3 Runs (microseconds)

Aggregation: All 3 Runs (microseconds, 1M row dataset)

Lookup / Join: All 3 Runs (microseconds, 1M row dataset)


Hotspot Update: All 3 Runs (microseconds)

Transactions: All 3 Runs, All Variants (microseconds)

INSIGHT Stability check: all three runs show consistent results. The Mongo WMajority P50 stays between 6,498µs and 7,295µs across all runs. PostgreSQL WAL P50 stays between 1,142µs and 1,327µs. The 6x gap is stable, not a one-run artifact.

Bulk Insert: The Index Cost Reality

Bulk insert loads 1,500,000 order documents in a single call. Three variants are measured: Postgres COPY (reference, bypasses SQL parser), Postgres UNNEST (real application path through the SQL engine), and MongoDB InsertMany with ordered=false (enables parallel WiredTiger writes).
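The UNNEST variant works by sending one statement whose parameters are column-wise arrays, rather than 1.5M individual INSERTs. A sketch of that shape is below; the table and column names follow the article's description, but the helper and the exact DDL are assumptions for illustration.

```go
package main

import "fmt"

// order mirrors the benchmark's order rows (field names are assumptions).
type order struct {
	UserID int
	Symbol string
	Price  float64
	Qty    int64
}

// unnestInsert is the single-statement batch form used by the "real
// application path" variant: one round trip, column-wise array params.
const unnestInsert = `
INSERT INTO orders (user_id, symbol, price, qty)
SELECT * FROM unnest($1::int[], $2::text[], $3::float8[], $4::bigint[])`

// columns flattens row structs into the four parallel slices the
// statement expects (this is what a driver Exec call would be handed).
func columns(rows []order) (ids []int, syms []string, prices []float64, qtys []int64) {
	for _, r := range rows {
		ids = append(ids, r.UserID)
		syms = append(syms, r.Symbol)
		prices = append(prices, r.Price)
		qtys = append(qtys, r.Qty)
	}
	return
}

func main() {
	ids, syms, prices, qtys := columns([]order{
		{1, "BTC-USD", 64000.5, 2},
		{2, "BTC-USD", 64001.0, 1},
	})
	fmt.Println(len(ids), syms[0], prices[1], qtys[0])
}
```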

Bulk Insert Caveat: MongoDB maintained only the _id index during bulk insert (secondary indexes were applied only for the read tests), while PostgreSQL maintained three (symbol, price+qty, user_id). This is the primary reason for MongoDB's 3.9x advantage here. Add equivalent indexes to MongoDB and the gap narrows considerably; this result should not be generalized.

In this benchmark, MongoDB outperforms PostgreSQL on bulk inserts primarily because of a much lighter write path. MongoDB’s InsertMany is optimized for high-throughput ingestion, batching documents and writing them with minimal overhead, especially in this setup where only the _id index is maintained. This keeps both CPU and disk I/O low, allowing it to scale efficiently as the number of operations increases.

On the other hand, PostgreSQL does more work per insert. Each row must update multiple indexes, pass strict type validation, and be recorded in the WAL (Write-Ahead Log), followed by an fsync to guarantee durability. These steps ensure strong consistency and crash safety, but they also introduce additional latency and I/O pressure. Even fast paths like COPY can’t fully avoid this overhead when indexes and durability guarantees are in place.

So the difference isn’t a flaw; it’s a design trade-off. MongoDB is optimized for fast, flexible ingestion with fewer constraints, while PostgreSQL prioritizes data integrity, rich indexing, and transactional guarantees, which naturally makes its insert path heavier.

Concurrent Insert P50 Latency Comparison (Run 3, c=30, microseconds)

INSIGHT Postgres concurrent insert P50 is 10–16x faster than MongoDB across all three runs at matching concurrency. Postgres’s MVCC with row-level locking allows all 30 goroutines to insert without blocking each other; MongoDB’s WiredTiger serializes more aggressively under concurrent single-document inserts.
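The concurrent-insert workload can be sketched as a small Go harness: c goroutines splitting the operations and timing each call. The insert callback here is a no-op stand-in; the real harness would issue a single-row INSERT (PostgreSQL) or InsertOne (MongoDB) inside it.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runConcurrent spreads ops across c workers, recording one latency
// sample per operation. insert is the per-operation callback.
func runConcurrent(c, ops int, insert func()) []time.Duration {
	var (
		wg        sync.WaitGroup
		mu        sync.Mutex
		latencies []time.Duration
	)
	perWorker := ops / c
	for w := 0; w < c; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			local := make([]time.Duration, 0, perWorker)
			for i := 0; i < perWorker; i++ {
				start := time.Now()
				insert() // real harness: single-row INSERT / InsertOne
				local = append(local, time.Since(start))
			}
			mu.Lock()
			latencies = append(latencies, local...)
			mu.Unlock()
		}()
	}
	wg.Wait()
	return latencies
}

func main() {
	samples := runConcurrent(30, 3000, func() {}) // no-op stand-in for the DB call
	fmt.Println(len(samples))
}
```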

Aggregation: Index Architecture Matters

Aggregation groups 1,000,000 BTC-USD orders by price and sums quantity. This is the core price-book aggregation query for any exchange: given a symbol, what is the total volume at each price level? Both databases fully drain the cursor before the timer stops.

Aggregation Latency (Microseconds) on 1,000,000 Row Dataset

Aggregation P50 Latency by Run (milliseconds, lower is better)

At P50, Postgres is consistently 2.5–3x faster due to the covering index-only scan. At P95/P99, the gap narrows at high concurrency.

Lookup / Join: The Most Critical Gap for FinTech

The lookup test performs a self-join on the orders dataset using user_id, retrieving all other orders for the same user across 10 input records. With 1,000 unique users over 1,000,000 rows, each user maps to roughly 1,000 orders, the profile of institutions and HFT traders.

Why This Test is the Most Important in a FinTech Context

Every meaningful query in a financial platform is a join: order history, portfolio aggregation, risk exposure, trade settlement, compliance reporting.

In a MongoDB-only architecture, teams attempting to avoid $lookup (due to its performance) embed related data directly in the same document. Let’s examine what that costs.

Lookup / Join Latency (Microseconds) on 1,000,000 Row Dataset

Lookup / Join P50 Latency Comparison (Run 3, c=10, microseconds)

CRITICAL The lookup gap is 50–120x in this benchmark. This is an architectural difference in query execution models, not a tuning problem. No amount of indexing or configuration gives MongoDB’s $lookup the algorithmic efficiency of a relational hash join.

Hotspot Update: Price Level Updates

The hotspot test simulates 30 goroutines concurrently updating one of 100 price-level rows. With 30 workers and 100 possible price values, collision probability is 30%. This models the core write pattern of an exchange, where many concurrent threads increment or decrement volume at a specific price level.
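The contention shape is easy to reproduce in miniature: workers incrementing one of a fixed number of guarded counters, with a per-level mutex standing in for the row lock each UPDATE acquires. This only models the collision pattern, not the storage-engine costs that produce the 10.6x gap.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// level models one price-level row; the mutex stands in for the
// row-level lock that each UPDATE acquires.
type level struct {
	mu  sync.Mutex
	qty int64
}

// hammer runs `workers` goroutines, each doing `ops` increments against
// a random one of `prices` levels, and returns the total volume written.
func hammer(workers, prices, ops int) int64 {
	book := make([]level, prices)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(seed int64) {
			defer wg.Done()
			r := rand.New(rand.NewSource(seed))
			for i := 0; i < ops; i++ {
				l := &book[r.Intn(prices)] // pick a price level at random
				l.mu.Lock()                // workers updating the same row queue here
				l.qty++                    // the volume increment
				l.mu.Unlock()
			}
		}(int64(w))
	}
	wg.Wait()
	var total int64
	for i := range book {
		total += book[i].qty
	}
	return total
}

func main() {
	fmt.Println(hammer(30, 100, 1000)) // 30 workers, 100 price levels: prints 30000
}
```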

Hotspot Update Latency (Microseconds)

Hotspot Update P50 Latency (Run 3, c=30, microseconds)

Postgres is 10x faster at P50 due to HOT (Heap Only Tuple) updates. Because quantity is not in any index, and the table has 30% free space per page (fillfactor=70), Postgres writes the new tuple version into the same heap page as the old one. No index page is written. MongoDB must update the document and maintain the index entry on every operation.

Transactions: The Production Durability Comparison

The transaction test is the most consequential for financial systems. Each transaction does two things: increment quantity on a price-level row (simulating a fill), and insert a new order document (simulating the fill record). Three variants are measured to show the full picture.
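The two-step transaction can be sketched against a stub store so it runs without a database; in a real harness the Execer would be a *sql.Tx (PostgreSQL) or a Mongo session callback. The statement text and the memStore stub are assumptions for illustration, not the benchmark's code.

```go
package main

import (
	"fmt"
	"sync"
)

// Execer is a minimal stand-in for the subset of a transaction handle
// that this workload needs.
type Execer interface {
	Exec(query string, args ...any) error
}

const (
	updateLevel = `UPDATE price_levels SET quantity = quantity + $2 WHERE price = $1`
	insertOrder = `INSERT INTO orders (user_id, symbol, price, qty) VALUES ($1, $2, $3, $4)`
)

// applyFill performs the benchmark's two steps: bump the hot
// price-level row, then record the fill as a new order.
func applyFill(tx Execer, price float64, qty int64) error {
	if err := tx.Exec(updateLevel, price, qty); err != nil {
		return err
	}
	return tx.Exec(insertOrder, 42, "BTC-USD", price, qty)
}

// memStore is a hypothetical in-memory stub used only so the sketch
// runs without a database.
type memStore struct {
	mu     sync.Mutex
	levels map[float64]int64 // price -> quantity
	orders int
}

func (s *memStore) Exec(query string, args ...any) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	switch query {
	case updateLevel:
		s.levels[args[0].(float64)] += args[1].(int64)
	case insertOrder:
		s.orders++
	}
	return nil
}

func main() {
	s := &memStore{levels: map[float64]int64{}}
	for i := 0; i < 3; i++ {
		_ = applyFill(s, 100.5, 10)
	}
	fmt.Println(s.levels[100.5], s.orders) // 30 3
}
```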

Transaction Latency (Microseconds): All Runs and Variants

Transaction P50 Latency: All Variants (Run 3, c=30)

CRITICAL On localhost: Mongo WMajority (7,295µs) vs PostgreSQL WAL (1,142µs), a 6.4x gap. On a real 3-node cluster the gap widens to 8–11x due to replication round-trips. The 6.4x is a floor, not a ceiling.

Benchmark Summary

PostgreSQL outperforms MongoDB in 5 of 6 workloads, with margins ranging from 2.7x (aggregation) to 121x (lookup/join).

(Chart: MongoDB is faster at bulk insert; PostgreSQL is faster in every other case.)

In a nutshell,


Postgres is faster in most of the workloads*!

* At least for the scenarios I tested.

PostgreSQL vs MongoDB Transactions: What Actually Happens

PostgreSQL (MVCC + WAL)

PostgreSQL uses Multi-Version Concurrency Control (MVCC) to handle transactions efficiently; transactions have been a core part of PostgreSQL since its early design. Instead of overwriting rows, each update creates a new version tagged with transaction IDs (xmin/xmax). This ensures readers never block writers, writers never block readers, and every transaction sees a consistent snapshot.

Durability is handled by the Write-Ahead Log (WAL). Every change is written sequentially to the WAL before the modified data pages reach disk, and a COMMIT triggers an fsync().

Key takeaway: on a single node, COMMIT = durable. No network hop, no extra coordination cost. Postgres can also be tuned for write-heavy workloads based on your durability and RTO requirements.
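A few of the relevant single-node durability knobs, as they appear in postgresql.conf (the values shown are the stock defaults; check your version's documentation before tuning):

```
# postgresql.conf — durability knobs (defaults shown)
synchronous_commit = on      # off trades a small crash window for latency
wal_sync_method = fdatasync  # default is platform-dependent; fdatasync on Linux
commit_delay = 0             # microseconds to wait for group commit batching
```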

MongoDB (Transactions + Write Concern)

MongoDB introduced multi-document ACID transactions in 4.0, built on WiredTiger’s MVCC; transactions were not part of MongoDB from the beginning. Durability, however, depends on the chosen write concern, reflecting its distributed-first design: w:1 acknowledges once the primary has the write, while w:majority waits for a majority of replica-set members.

This introduces a fundamental tradeoff: acknowledge fast and accept a data-loss window (w:1), or wait for replication and pay the latency (w:majority).

Contention & Commit Behavior

Under write contention, architectural differences become more visible:

MongoDB

All writes are eventually serialized in the oplog order, which can introduce contention under high write throughput.

PostgreSQL

Row-level locking means only transactions touching the same row contend; unrelated writes commit independently.

Transaction P50 at c=30 (Run 3, Microseconds)

Note: Mongo w:majority here is measured on localhost. In a real 3-node setup, expect additional latency per commit from replication round-trips.

Immutable WAL Is Not Oplog

At a glance, PostgreSQL’s WAL and MongoDB’s oplog may look similar: both record changes and power replication. But architecturally, they solve very different problems.

PostgreSQL’s Write-Ahead Log (WAL) is a low-level, physical log. Every change, whether it’s a row insert, an index update, or a visibility-map tweak, is recorded as a byte-level modification to a specific page. These records are compact, binary, and strictly deterministic: apply the same WAL, and you get the exact same database state every time.

Three properties make WAL especially powerful for financial systems: it is append-only and immutable, its replay is byte-for-byte deterministic, and segments are retained until their consumers have processed them.

This combination gives PostgreSQL something critical: perfect historical reconstruction. You can replay the database to any exact moment in time with confidence.

MongoDB’s oplog, on the other hand, is a logical operation log. Each entry is a BSON document describing what changed (insert/update/delete), not how the bytes changed on disk. While this makes it more human-readable and flexible, it also makes it heavier and less deterministic.

The bigger issue is structural: the oplog is a capped collection.

Once it fills up, older entries are overwritten.

That introduces a real operational risk: if a replica falls behind for longer than the oplog window covers, it cannot catch up incrementally and must perform a full resync.

PostgreSQL avoids this class of failure entirely. WAL segments are retained until replicas consume them. If a replica lags, disk usage grows — but data safety is never compromised silently.

Why This Matters: Point-in-Time Recovery (PITR)

For trading systems or regulated platforms, this isn’t just an implementation detail; it’s a requirement.

Regulators and frameworks like FinCEN (Financial Crimes Enforcement Network), MiCA (Markets in Crypto-Assets), and the FCA (Financial Conduct Authority) expect systems to reconstruct exact historical states for audits and investigations.

PostgreSQL handles this natively: a base backup plus archived WAL, replayed to a recovery target.

No external dependency. Fully deterministic.
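A minimal self-managed PITR setup looks roughly like this (archive paths and the target timestamp are placeholders):

```
# postgresql.conf on the primary: continuously archive WAL segments
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # %p/%f are expanded by Postgres

# postgresql.conf on the restored server (plus an empty recovery.signal
# file in the data directory): replay archived WAL up to a point in time
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2026-04-01 14:30:00+00'
```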

MongoDB does provide PITR via Atlas and Ops Manager, but self-managed setups require more operational effort than PostgreSQL’s native WAL-based recovery.

Summary

PostgreSQL wins 5 of 6 workloads, and the margins aren’t close.

Lookup / join is the most critical gap for financial systems: 121x. Every meaningful query in a trading platform is a join: order history, portfolio aggregation, risk exposure, trade settlement, compliance reporting. This is an architectural difference, not a tuning problem. No amount of indexing gives MongoDB’s $lookup the efficiency of a relational hash join.

Concurrent inserts run 12x faster on PostgreSQL. Hotspot updates (the core write pattern of an exchange: incrementing volume at a price level) run 10.6x faster, thanks to HOT updates that skip index writes entirely. Transactions at equivalent production durability (WMajority vs WAL) run 6.4x faster. Aggregation runs 2.7x faster via covering index-only scans.

MongoDB wins only bulk insert, at 3.9x, but it maintained one index to PostgreSQL’s three. Close that gap and the advantage shrinks considerably.

But,


If you think I’m against MongoDB, you’re wrong. As a redemption arc, I’ll explain where to use MongoDB as well.

When MongoDB Isn’t the Wrong Choice

MongoDB gets dismissed unfairly in finance contexts, but there are real use cases where it’s the better fit:

High-volume event ingestion and time-series data. If you’re writing millions of raw trade events, logs, or market data ticks with a flexible or evolving schema and minimal read-back, MongoDB’s lightweight write path is genuinely faster and operationally simpler than PostgreSQL.

Document-shaped data with no joins. User preferences, notification configs, audit trails, unstructured metadata: anything that maps naturally to a document and is never joined to other collections. Forcing this into relational tables adds schema overhead with no performance benefit.

Early-stage products with changing schemas. Schema migrations in PostgreSQL require care at scale. If your data model is still evolving rapidly, MongoDB’s flexibility has real value, with the understanding that you’ll pay the join penalty later as the product matures.

Complementary to PostgreSQL in a hybrid architecture. Many mature fintech systems use both: PostgreSQL for the transactional core (orders, positions, settlements) and MongoDB for adjacent workloads (analytics events, audit logs, document storage). This is not an anti-pattern; it’s a deliberate separation of concerns.

The mistake isn’t using MongoDB. It’s using it as the primary store for a workload that’s fundamentally relational. So it’s not MongoDB vs PostgreSQL, but MongoDB alongside PostgreSQL.


References:
- PostgreSQL WAL documentation: postgresql.org/docs/current/wal-intro.html
- MongoDB Write Concern: mongodb.com/docs/manual/reference/write-concern/
- PostgreSQL HOT updates: postgresql.org/docs/current/storage-hot.html
- MongoDB Oplog documentation: mongodb.com/docs/manual/core/replica-set-oplog/

This article was originally published on Fintech Tag and is republished here under RSS syndication for informational purposes. All rights and intellectual property remain with the original author. If you are the author and wish to have this article removed, please contact us at [email protected].
