AlgoMaster Newsletter

Why is Redis so Fast and Efficient?

despite being single-threaded

Ashish Pratap Singh
May 21, 2025

Redis (Remote Dictionary Server) is a blazing-fast, open-source, in-memory key-value store that’s become a go-to choice for building real-time, high-performance applications.

Despite being single-threaded, a single Redis server can handle over 100,000 requests per second.

But how does Redis achieve such incredible performance with a single-threaded architecture?

In this article, we’ll break down the 5 key design choices and architectural optimizations that make Redis so fast and efficient:

  • In-Memory Storage: Data lives entirely in RAM, which is orders of magnitude faster than disk.

  • Single-Threaded Event Loop: Eliminates concurrency overhead for consistent, low-latency performance.

  • Optimized Data Structures: Built-in structures like hashes, lists, and sorted sets are implemented with speed and memory in mind.

  • I/O Efficiency: Event-driven networking, pipelining, and I/O threads help Redis scale to thousands of connections.

  • Server-Side Scripting: Lua scripts allow complex operations to run atomically, without round trips.

Let’s get started!


1. In-Memory Storage

The single most important reason Redis is so fast comes down to one design decision:

All data in Redis lives in RAM.

Unlike traditional databases that store their data on disk and read it into memory when needed, Redis keeps the entire dataset in memory at all times.

Even with a fast SSD, reading from disk is thousands of times slower than reading from RAM.

So when Redis performs a GET, it doesn’t wait for disk I/O. It simply follows a pointer in memory—an operation that completes in nanoseconds, not milliseconds.
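Conceptually, a GET is just a hash-table lookup followed by a pointer dereference. Here’s a toy Python sketch of that idea (an illustration only, not Redis’s actual C implementation):

```python
# A toy in-memory key-value store illustrating why GET is fast:
# the key hashes to a bucket, and the value is reached by following
# a pointer in RAM -- no disk I/O involved.
class ToyRedis:
    def __init__(self):
        # Python dicts are hash tables, much like Redis's keyspace
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        # Average-case O(1): hash the key, jump to the bucket,
        # follow the pointer to the value
        return self._data.get(key)

store = ToyRedis()
store.set("user:42", "Alice")
print(store.get("user:42"))   # Alice
print(store.get("missing"))   # None
```

The real keyspace in Redis is a C hash table (`dict`) with incremental rehashing, but the core access pattern is the same: no syscalls, no disk, just memory reads.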

Redis doesn’t just store data in RAM; it stores it efficiently.

  • Small values are packed into compact memory formats (ziplist, intset, listpack)

  • These formats improve CPU cache locality, letting Redis touch fewer memory locations per command
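These compact encodings kick in automatically based on thresholds in redis.conf; once a collection grows past them, Redis converts it to a standard hash table or skiplist. The values below are the Redis 7 defaults (older versions use ziplist-named equivalents):

```conf
# Small hashes, lists, and sorted sets are stored as listpacks
# until they exceed these limits (Redis 7 defaults)
hash-max-listpack-entries 128
hash-max-listpack-value 64
list-max-listpack-size 128
zset-max-listpack-entries 128
zset-max-listpack-value 64

# Sets containing only integers use the compact intset encoding
# up to this many elements
set-max-intset-entries 512
```

You can inspect which encoding a key currently uses with `OBJECT ENCODING <key>`.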

But There’s a Trade-Off…

While in-memory storage gives Redis its speed, it also introduces two important limitations:

1. Memory-Bound Capacity

Your dataset size is limited by how much RAM your machine has. For example:

  • On a 32 GB server, Redis can only store up to 32 GB of data (minus overhead)

  • If you exceed this, Redis starts evicting keys or rejecting writes unless you scale horizontally

To deal with this, Redis offers key eviction policies like:

  • Least Recently Used (LRU)

  • Least Frequently Used (LFU)

  • Random

  • Volatile TTL-based eviction

You can also shard your dataset across a Redis Cluster.
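Eviction is enabled by setting `maxmemory` together with a `maxmemory-policy` such as `allkeys-lru`. To make the LRU idea concrete, here’s a pure-Python sketch of what an LRU-style policy does (note that real Redis uses an approximate, sampling-based LRU rather than tracking exact recency like this):

```python
from collections import OrderedDict

# Toy illustration of allkeys-lru-style eviction: when the store
# is full, the least recently used key is dropped to make room.
class LRUStore:
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.max_keys:
            self._data.popitem(last=False)  # evict least recently used

store = LRUStore(max_keys=2)
store.set("a", 1)
store.set("b", 2)
store.get("a")         # touch "a", so "b" is now least recently used
store.set("c", 3)      # capacity exceeded: "b" gets evicted
print(store.get("b"))  # None
```

Redis avoids exact LRU bookkeeping because it would cost extra memory per key; instead it samples a few keys per eviction and removes the best candidate, which approximates LRU closely in practice.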

2. Volatility & Durability

RAM is volatile. It loses data when the server shuts down or crashes. That’s risky if you’re storing anything you care about long term.

Redis solves this with optional persistence mechanisms, allowing you to write data to disk periodically or in real time.

Redis provides two main persistence models to give you durability without compromising performance:

  • RDB (Redis Database Snapshot)

    • Takes point-in-time snapshots of your data

    • Runs in a forked child process, so the main thread keeps serving traffic

    • Good for backups or systems that can tolerate some data loss

  • AOF (Append-Only File)

    • Logs every write operation to disk

    • Offers configurable fsync options:

      • Every write (always): safest but slowest

      • Every second (everysec): a balanced default

      • Never (no): fastest but riskiest, leaving flushes to the OS

    • Supports AOF rewriting in the background to reduce file size

These persistence methods are designed to run asynchronously, so the main thread never blocks.
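Both models are enabled through redis.conf. A typical setup combining periodic snapshots with a balanced AOF policy might look like this (the values here are illustrative, not recommendations):

```conf
# RDB: snapshot if at least 1 key changed in 900s,
# 10 keys in 300s, or 10000 keys in 60s
save 900 1
save 300 10
save 60 10000

# AOF: log every write, fsync once per second (the "balanced" option)
appendonly yes
appendfsync everysec

# Rewrite the AOF in the background once it has doubled in size
# and is at least 64mb
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```

RDB snapshotting and AOF rewriting both fork a child process, so the main thread keeps serving requests while the disk work happens in the background.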


2. Single-Threaded Event Loop
