Maximum Throughput for a Single Database Instance
You will notice in the table below that the write-optimized NoSQL databases sustain higher write throughputs than MySQL or Postgres: engines like Cassandra use Log-Structured Merge (LSM) tree indexing, which turns random writes into sequential appends, whereas Postgres and MySQL use B-trees. Why does that matter? Find the answer in Martin Kleppmann's book, *Designing Data-Intensive Applications*!
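To make the B-tree vs. LSM distinction concrete, here is a toy sketch (not any real engine's code) of an LSM-style write path: writes land in an in-memory memtable in O(1), and a full memtable is flushed as one immutable sorted run, so disk writes are sequential. The cost shows up on reads, which may have to probe several runs.

```python
import bisect

class ToyLSM:
    """Toy LSM-style store: writes hit an in-memory memtable;
    full memtables are flushed as immutable sorted runs (SSTables)."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}    # fast, unordered in-memory writes
        self.sstables = []    # sorted (key, value) runs, newest first
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value            # O(1), no disk seek
        if len(self.memtable) >= self.memtable_limit:
            run = sorted(self.memtable.items())  # one sequential "flush"
            self.sstables.insert(0, run)
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        # Reads may probe several runs, newest first -- this is why
        # point reads can be slower than a single B-tree descent.
        for run in self.sstables:
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None

store = ToyLSM(memtable_limit=2)
for i in range(5):
    store.put(f"k{i}", i)
print(store.get("k1"), store.get("k4"))  # → 1 4
```

Real engines add a write-ahead log, bloom filters, and background compaction of runs; this sketch only shows why the write path is append-friendly.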
But ElasticSearch, being a specialized full-text search system, is understandably slower for OLTP use cases.
Important caveats:
Numbers assume:
- High-end server hardware (good CPU/RAM)
- Data fits in memory
- Simple queries/operations
- Optimal schema design and indexing
- No network bottlenecks
- Single-operation workloads (batch operations can achieve higher throughput)
Real-world performance often 5-10x lower due to:
- Complex queries
- Disk I/O
- Network latency
- Resource contention
- Transaction overhead
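For context on where numbers like these come from, here is a minimal sketch of a closed-loop QPS harness. The "database" is just an in-memory dict (an assumption standing in for a real client), so the figures it prints reflect the harness itself; real benchmarks such as pgbench, YCSB, or memtier_benchmark add concurrency, warm-up, and latency percentiles.

```python
import time

def measure_qps(op, duration=0.2):
    """Run `op` in a tight loop for `duration` seconds; return ops/sec."""
    end = time.perf_counter() + duration
    count = 0
    while time.perf_counter() < end:
        op()
        count += 1
    return count / duration

# Stand-in "database": an in-memory dict, not any real engine.
table = {}
write_qps = measure_qps(lambda: table.__setitem__("k", 1))
read_qps = measure_qps(lambda: table.get("k"))
print(f"writes/s ~ {write_qps:,.0f}, reads/s ~ {read_qps:,.0f}")
```

Swapping the two lambdas for real client calls (a Redis `SET`/`GET`, a Postgres `INSERT`/`SELECT`) turns this into a crude single-connection benchmark, which is exactly why real-world numbers come in well below the table: each call then pays network latency and server-side overhead.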
Database | Read QPS | Write QPS | Notes |
---|---|---|---|
PostgreSQL | 25K | 5K | Simple queries, connection pooling, well-indexed tables |
MySQL | 25K | 5K | Similar to Postgres, with InnoDB engine |
Redis | 150K | 100K | Simple GET/SET, data fits in memory |
Neo4j | 10K | 1K-2K | Simple graph traversals, cached data |
MongoDB | 50K | 10K | Simple documents, data in memory |
Cassandra | 20K | 15K | Single node, optimized for writes |
ClickHouse | 100K | 50K | Single node, columnar storage |
ElasticSearch | 15K | 5K | Single node, document queries |
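The batch-operation caveat above is mostly about amortizing per-request round trips (e.g. Redis pipelining, multi-row `INSERT`s). A toy sketch, using a `time.sleep` as a stand-in for a 0.5 ms network round trip (an assumed figure), shows the effect:

```python
import time

ROUND_TRIP = 0.0005  # assumed 0.5 ms network round trip

def send(ops):
    """Pretend to send one request carrying `ops` operations."""
    time.sleep(ROUND_TRIP)
    return len(ops)

n = 200
t0 = time.perf_counter()
for i in range(n):
    send([i])                                # one op per round trip
single = time.perf_counter() - t0

t0 = time.perf_counter()
for chunk in range(0, n, 50):
    send(list(range(chunk, chunk + 50)))     # 50 ops per round trip
batched = time.perf_counter() - t0

print(f"one-by-one: {single:.3f}s, batched: {batched:.3f}s")
```

With 200 operations, the one-by-one loop pays 200 round trips while the batched loop pays 4, so throughput improves by roughly the batch size until server-side work, not the network, becomes the bottleneck.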