Redis Beyond Caching: Data Structures, Pub/Sub, and Real-Time Applications
Learn how to use Redis as more than a cache — explore its data structures, pub/sub messaging, streams, and patterns for building fast, scalable applications.
Most developers know Redis as "that fast cache." But Redis is a full-featured in-memory data structure server that can handle session management, real-time leaderboards, rate limiting, message queues, and much more — all at sub-millisecond latency.
Why Redis is so fast
Redis keeps its entire dataset in memory and executes commands on a single-threaded event loop (newer versions can offload network I/O to threads, but commands still run one at a time). No disk I/O on reads, no lock contention, no context switching. The result: 100,000+ operations per second on modest hardware.
Typical latency comparison:
┌────────────────────┬────────────┐
│ Operation          │ Latency    │
├────────────────────┼────────────┤
│ Redis GET          │ ~0.1 ms    │
│ PostgreSQL SELECT  │ 1-5 ms     │
│ Disk read          │ 5-10 ms    │
│ REST API call      │ 50-200 ms  │
└────────────────────┴────────────┘
Core data structures
Redis isn't just key-value. It supports rich data structures that map directly to common application needs.
Strings — the building block
Strings hold text, numbers, or binary data up to 512 MB:
SET user:1001:name "Alice"
GET user:1001:name # "Alice"
# Atomic counter
INCR page:views # 1, 2, 3, ...
INCRBY cart:total 2500 # Add 25.00 (store cents)
# Expiring keys (TTL)
SET session:abc123 "{...}" EX 3600 # Expires in 1 hour
TTL session:abc123 # Seconds remaining
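In application code, the same commands map directly onto redis-py calls. A minimal sketch, assuming a connected client r (e.g. r = redis.Redis()); the key names and helper functions are illustrative:

```python
def record_page_view(r, page: str) -> int:
    """Atomically increment a page-view counter and return the new total."""
    return r.incr(f"page:views:{page}")

def create_session(r, token: str, payload: str, ttl_seconds: int = 3600) -> None:
    """Store a session blob that Redis expires automatically."""
    r.set(f"session:{token}", payload, ex=ttl_seconds)
```

Because INCR is atomic, concurrent requests never lose counts — no read-modify-write race.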
Hashes — lightweight objects
Hashes are perfect for storing objects without serialization overhead:
HSET user:1001 name "Alice" email "alice@example.com" plan "pro"
HGET user:1001 name # "Alice"
HGETALL user:1001 # All fields and values
HINCRBY user:1001 logins 1 # Atomic field increment
Small hashes are stored in a compact internal encoding, so they can use roughly 10x less memory than storing each field as a separate top-level key.
Lists — queues and feeds
Ordered collections that support push/pop from both ends:
LPUSH notifications:alice "New comment on your post"
LPUSH notifications:alice "You have a new follower"
LRANGE notifications:alice 0 9 # Latest 10 notifications
# Use as a queue (producer/consumer)
RPUSH queue:emails "send-welcome"
LPOP queue:emails # Process next job
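A worker usually wants to block until a job arrives rather than poll with LPOP. A sketch of the producer/consumer pair using redis-py, assuming a connected client r; the queue name and helpers are illustrative:

```python
def enqueue(r, queue: str, job: str) -> None:
    r.rpush(queue, job)  # producers append to the tail

def next_job(r, queue: str, timeout: int = 5):
    """Block up to `timeout` seconds for a job; return None if the queue stays empty."""
    item = r.blpop(queue, timeout=timeout)  # pops from the head → FIFO order
    return item[1] if item else None
```

BLPOP returns a (queue, value) pair, which lets one worker wait on several queues at once.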
Sets — unique collections
Unordered collections of unique strings:
SADD tags:post:42 "redis" "database" "nosql"
SMEMBERS tags:post:42 # All tags
SISMEMBER tags:post:42 "redis" # true
# Set operations
SINTER tags:post:42 tags:post:99 # Common tags
SUNION tags:post:42 tags:post:99 # All tags combined
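A classic use of sets is counting unique visitors per day: duplicates are ignored automatically, so no dedup logic is needed. A sketch assuming a connected redis-py client r; the visitors:{day} key scheme is illustrative:

```python
def track_visitor(r, day: str, visitor_id: str) -> None:
    r.sadd(f"visitors:{day}", visitor_id)  # adding the same member twice is a no-op

def unique_visitors(r, day: str) -> int:
    return r.scard(f"visitors:{day}")  # set cardinality = distinct visitors
```

(If exact counts aren't required, HyperLogLog via PFADD/PFCOUNT does the same job in ~12 KB per key.)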
Sorted Sets — leaderboards and rankings
Sets with a score for each member, automatically sorted:
ZADD leaderboard 1500 "alice" 2300 "bob" 1800 "charlie"
ZREVRANGE leaderboard 0 2 WITHSCORES # Top 3 players
ZRANK leaderboard "alice" # Rank (0-based)
ZINCRBY leaderboard 100 "alice" # Alice scores 100 points
This is how game leaderboards, trending posts, and priority queues are built at scale.
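A leaderboard wrapper in redis-py takes only a few lines. A sketch assuming a connected client r; function names are illustrative:

```python
def record_score(r, board: str, player: str, points: int) -> float:
    """Add points to a player's score and return the new total."""
    return r.zincrby(board, points, player)

def top_players(r, board: str, n: int = 3):
    """Return the top n players as (member, score) pairs, highest first."""
    return r.zrevrange(board, 0, n - 1, withscores=True)
```

Both operations are O(log N), so rankings stay fast even with millions of players.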
Real-world patterns
Pattern 1: Session storage
Instead of database-backed sessions, use Redis for instant lookups:
import json
import redis

r = redis.Redis()

# Store session (expires in 1 hour)
r.setex(f"session:{token}", 3600, json.dumps(user_data))

# Retrieve session
data = r.get(f"session:{token}")
user_data = json.loads(data) if data else None
Why? A database-backed session lookup adds 1-5 ms per request; Redis adds ~0.1 ms. At 1,000 req/s, that's up to ~5 seconds of combined latency removed every second.
Pattern 2: Rate limiting
A simple fixed-window rate limiter (one counter per user per window) in three commands:
# Allow 100 requests per minute per user
MULTI
INCR rate:user:1001
EXPIRE rate:user:1001 60 NX # Start the 60s window only if no TTL exists (NX needs Redis 7.0+)
EXEC
# Check: if the INCR result is > 100, reject the request
On older Redis versions, set the TTL only when INCR returns 1 — calling plain EXPIRE on every request would keep extending the window.
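The same logic wrapped as a helper in redis-py — a sketch assuming a connected client r; the function name and key scheme are illustrative:

```python
def allow_request(r, user_id: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window limiter: allow at most `limit` requests per `window` seconds."""
    key = f"rate:user:{user_id}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # start the window on the first request only
    return count <= limit
```

A fixed window allows up to 2x the limit across a window boundary; if that matters, a sorted set of request timestamps gives a true sliding window at the cost of more memory.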
Pattern 3: Caching with cache-aside
The most common caching pattern:
def get_user(user_id):
    # 1. Check cache
    cached = r.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)
    # 2. Cache miss → query database
    user = db.query("SELECT * FROM users WHERE id = %s", user_id)
    # 3. Populate cache (expire in 5 minutes)
    r.setex(f"user:{user_id}", 300, json.dumps(user))
    return user
Pattern 4: Pub/Sub for real-time events
Redis Pub/Sub enables real-time messaging between services:
# Subscriber (listens for messages)
SUBSCRIBE chat:room:42
# Publisher (sends messages)
PUBLISH chat:room:42 "Hello everyone!"
In application code:
# Publisher
r.publish("notifications", json.dumps({
    "type": "new_order",
    "order_id": 12345
}))

# Subscriber
pubsub = r.pubsub()
pubsub.subscribe("notifications")
for message in pubsub.listen():
    if message["type"] == "message":  # skip subscribe confirmations
        handle_notification(json.loads(message["data"]))
Pattern 5: Distributed locks
Prevent race conditions across multiple servers:
# Acquire lock (NX = only if not exists, EX = expire)
SET lock:invoice:gen "worker-1" NX EX 30
# Release lock (only if you own it — use Lua script)
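The safe-release pattern the comment alludes to checks the token and deletes in one atomic step via a Lua script (GET-then-DEL as two commands would race). A sketch using redis-py conventions, assuming a connected client r; the helper names are illustrative:

```python
import uuid

RELEASE_SCRIPT = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""

def acquire_lock(r, name: str, ttl: int = 30):
    """Try to take the lock; return our token on success, None if it's held."""
    token = str(uuid.uuid4())
    if r.set(f"lock:{name}", token, nx=True, ex=ttl):
        return token
    return None

def release_lock(r, name: str, token: str) -> bool:
    # Deletes the key only if it still holds our token — atomic inside Redis
    return r.eval(RELEASE_SCRIPT, 1, f"lock:{name}", token) == 1
```

The TTL guarantees the lock frees itself if a worker crashes; the token check guarantees a slow worker can't release a lock that has since expired and been taken by someone else.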
Redis Streams — event sourcing and message queues
Streams are Redis's answer to Kafka-style event logs:
# Add events
XADD orders * user_id 1001 product "Widget" qty 3
XADD orders * user_id 1002 product "Gadget" qty 1
# Read the first 10 events (XRANGE scans oldest → newest)
XRANGE orders - + COUNT 10
# Read the latest 10, newest first
XREVRANGE orders + - COUNT 10
# Consumer groups (multiple workers)
XGROUP CREATE orders processors $ MKSTREAM
XREADGROUP GROUP processors worker-1 COUNT 5 BLOCK 2000 STREAMS orders >
Streams give you persistent, replayable event logs with consumer groups — perfect for microservices communication.
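A consumer-group worker loop in redis-py reads a batch, handles each event, and acknowledges it so it isn't redelivered. A sketch assuming a connected client r and the stream/group names from above; the helper name is illustrative:

```python
def process_batch(r, stream: str, group: str, consumer: str, handle) -> int:
    """Read up to 5 new events for this consumer, ACK each one handled."""
    entries = r.xreadgroup(group, consumer, {stream: ">"}, count=5, block=2000)
    processed = 0
    for _stream, messages in entries:
        for msg_id, fields in messages:
            handle(fields)
            r.xack(stream, group, msg_id)  # done — remove from the pending list
            processed += 1
    return processed
```

Events that a crashed worker never ACKed stay in the group's pending list and can be reassigned with XAUTOCLAIM.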
Performance tips
1. Use pipelining for bulk operations
Instead of 100 round trips, send 100 commands at once:
pipe = r.pipeline()
for i in range(100):
    pipe.set(f"key:{i}", f"value:{i}")
pipe.execute()  # Single round trip
2. Choose the right data structure
- Need a counter? → INCR (not GET + SET)
- Need an object? → Hash (not a serialized JSON string)
- Need unique items? → Set (not List + dedup logic)
- Need ranking? → Sorted Set (not sorting after the query)
3. Set TTL on everything
Memory is finite. Always set expiration on cache keys:
SET cache:api:result "{...}" EX 300 # 5-minute TTL
4. Use key naming conventions
resource:id:field
user:1001:profile
cache:api:v2:products
session:abc123
queue:emails:pending
Persistence: RDB vs AOF
Redis can persist data to disk for durability:
| Mode | How it works | Trade-off |
|---|---|---|
| RDB | Point-in-time snapshots | Fast recovery, possible data loss |
| AOF | Logs every write operation | Slower, minimal data loss |
| Both | RDB + AOF combined | Best durability |
# In redis.conf
save 900 1 # Snapshot if at least 1 key changed in 900s
appendonly yes # Enable AOF
appendfsync everysec # Fsync every second
Quick command reference
Need to look up a specific Redis command? Use our Redis Cheat Sheet — it covers 200+ commands organized by data structure with syntax, examples, and one-click copy.
Wrapping up
Redis is far more than a cache. Its data structures map cleanly to real application needs:
- Strings → sessions, counters, simple cache
- Hashes → user profiles, configuration objects
- Lists → queues, activity feeds, recent items
- Sets → tags, unique visitors, relationships
- Sorted Sets → leaderboards, rankings, priority queues
- Streams → event sourcing, message queues
- Pub/Sub → real-time notifications, chat
Start with caching, but don't stop there. Redis can replace several pieces of infrastructure in your stack — and do it all at sub-millisecond speed.