
Redis at Scale: 8 Patterns for Ruby Applications

programming-for-us 2025. 11. 17. 21:51

Redis at scale shines when patterns are chosen deliberately for event propagation, coordination, approximate analytics, atomicity, and durability. Pub/Sub vs Streams for event propagation, distributed locks with Redlock and contention handling, HyperLogLog and Bloom filters for cardinality and existence checks, Lua scripting for atomic multi-key operations, and snapshotting and AOF strategies for durability together form a practical toolkit for high-throughput Ruby applications. [1]

Pub/Sub vs Streams for event propagation

Event propagation is the first decision point: Redis Pub/Sub broadcasts messages to currently connected subscribers without persistence, while Redis Streams persist entries with IDs, retention policies, and consumer groups. Pub/Sub favors fire-and-forget, real-time fan-out such as live notifications, whereas Streams enable durable queues, replay, and backpressure handling for pipelines and background jobs. [2]

The two differ in delivery semantics and durability: Pub/Sub loses messages when no subscriber is listening, while Streams store entries and let consumer groups acknowledge processing. The choice also shapes scaling: Streams fit offline delivery, retries, and consumer parallelism, while Pub/Sub excels at ultra-low-latency transient broadcasts. [1]

In Ruby, this usually starts with a client that implements XADD, XREADGROUP, and XACK for Streams and SUBSCRIBE/PUBLISH for Pub/Sub, with background workers wired to consume reliably. Choose per use case; it is common to pair Pub/Sub for instant UI pushes with Streams for durable processing behind the scenes. [2]
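
As a minimal sketch of the durable side of that pairing, the consumer below uses the redis-rb method names (xadd, xreadgroup, xack) with an injected client; the stream and group names are illustrative, not from the source:

```ruby
# Sketch of durable Stream consumption in the redis-rb style (xadd,
# xreadgroup, xack). Stream/group names are illustrative; the client is
# injected so the class can be defined without a live server.
class OrderEventConsumer
  STREAM = "orders:events"
  GROUP  = "billing"

  def initialize(redis, consumer_name)
    @redis = redis
    @name  = consumer_name
  end

  # Append a durable event; "*" (the default ID) lets Redis assign it.
  def publish(payload)
    @redis.xadd(STREAM, payload)
  end

  # Read up to `count` new entries for this consumer, ack after handling.
  def poll(count: 10)
    entries = @redis.xreadgroup(GROUP, @name, STREAM, ">", count: count)
    (entries[STREAM] || []).each do |id, fields|
      handle(fields)
      @redis.xack(STREAM, GROUP, id) # ack only after successful processing
    end
  end

  def handle(fields); end

  # Stream entry IDs are "<milliseconds>-<sequence>"; parsing them is
  # handy for lag metrics (entry age = now - milliseconds part).
  def self.parse_entry_id(id)
    ms, seq = id.split("-", 2)
    [Integer(ms), Integer(seq)]
  end
end
```

Acknowledging only after `handle` succeeds is what gives Streams their retry story: unacked entries stay in the group's pending list and can be reclaimed by another consumer.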

Distributed locks with Redlock and contention handling

Distributed locks coordinate exclusive access to scarce resources across Ruby processes and nodes. Redlock acquires a lock with a TTL on a majority of independent Redis masters, mitigating single-node failures and clock drift. [1]

Acquisition must use jittered backoff and deadlines to avoid stampedes under contention, and locks should be reserved for short critical sections. They are no substitute for database constraints on core invariants; treat them as advisory locks that limit throughput while a transactional store remains the source of truth. [2]

In Ruby, lock release typically lives in an ensure block to guarantee cleanup, with renewal ("lock keepalive") only when the critical section is provably safe to extend. Metrics help too: lock wait time, acquisition failure rate, and TTL expirations expose hotspots. [1]
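
A single-node sketch of those mechanics, assuming a redis-rb style client: SET NX PX to acquire, a compare-and-delete Lua script so only the token owner can release, an ensure block for cleanup, and jittered backoff between attempts. A full Redlock implementation (e.g. the redlock gem) repeats the acquisition against a majority of masters; key names and timings here are illustrative.

```ruby
require "securerandom"

# Minimal single-node lock in the spirit of Redlock. Not a full
# multi-master implementation; parameters are illustrative.
class RedisLock
  # Delete the lock key only if it still holds our token.
  RELEASE_LUA = <<~LUA
    if redis.call("get", KEYS[1]) == ARGV[1] then
      return redis.call("del", KEYS[1])
    end
    return 0
  LUA

  def initialize(redis, key, ttl_ms: 5_000)
    @redis, @key, @ttl_ms = redis, key, ttl_ms
  end

  def with_lock(attempts: 5)
    token = SecureRandom.hex(16)            # unique owner token
    attempts.times do |attempt|
      if @redis.set(@key, token, nx: true, px: @ttl_ms)
        begin
          return yield
        ensure
          # Release in ensure so crashes inside the block still clean up;
          # the script refuses to delete a lock we no longer own.
          @redis.eval(RELEASE_LUA, keys: [@key], argv: [token])
        end
      end
      sleep(self.class.backoff(attempt))
    end
    raise "could not acquire #{@key}"
  end

  # Exponential backoff with full jitter, capped at 1 second, to avoid
  # stampedes when many workers contend for the same lock.
  def self.backoff(attempt, base: 0.05, cap: 1.0)
    rand * [cap, base * (2**attempt)].min
  end
end
```

Full jitter (a uniform draw up to the exponential ceiling) spreads retries out in time, which is exactly the contention handling the text calls for.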

HyperLogLog and Bloom filters for cardinality and existence checks

HyperLogLog and Bloom filters provide memory-efficient approximations over large sets. Ruby apps can count unique users or events with PFADD/PFCOUNT and test membership with a controllable false-positive rate via Bloom filters, trading exactness for speed and footprint. [2]

These structures are ideal ahead of expensive work: skip costly deduplication when the Bloom filter says "not present," or estimate reach without allocating gigabytes for exact sets. Tune them for error bounds, and periodically reset or merge them to manage drift over time windows. [1]

Both integrate well with event-ingestion paths and dashboards, delivering near-real-time metrics at a tiny memory overhead compared to exact hash sets, and they pair nicely with sharded keys to spread load across Redis Cluster slots. [2]
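
The HyperLogLog half can be sketched directly against core Redis commands (pfadd/pfcount, in redis-rb spelling); note that Bloom filters need the RedisBloom module (BF.ADD/BF.EXISTS) rather than core Redis. Key names below are illustrative:

```ruby
# Unique-visitor counting with HyperLogLog, redis-rb style API.
# Each HLL key stays around 12 KB regardless of cardinality.
class UniqueVisitors
  def initialize(redis)
    @redis = redis
  end

  def track(day, user_id)
    @redis.pfadd("uv:#{day}", user_id)
  end

  def count(day)
    @redis.pfcount("uv:#{day}")      # approximate distinct count
  end

  # PFCOUNT over several keys returns the cardinality of their union,
  # so a weekly reach estimate needs no merged copy.
  def weekly_count(days)
    @redis.pfcount(*days.map { |d| "uv:#{d}" })
  end

  # Redis HLL uses 2**14 registers; standard error is ~1.04 / sqrt(m),
  # i.e. roughly 0.81% -- useful when deciding if approximation is OK.
  def self.standard_error(registers = 2**14)
    1.04 / Math.sqrt(registers)
  end
end
```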

Lua scripting for atomic multi-key operations

Lua scripting turns a sequence of Redis commands into a single atomic execution on the server. Evaluating scripts with EVAL/EVALSHA eliminates race conditions in counters, inventory reservations, and composite cache updates. [1]

Scripts support validate-then-set patterns, multi-read/multi-write updates, and conditional invalidation, all without exposing intermediate states to other clients. They require careful key passing and time limits: keep scripts deterministic, small, and free of side effects outside Redis to preserve latency. [2]

Ruby clients typically preload scripts and call them by SHA for performance, with error handling that falls back gracefully when the script cache is flushed. Telemetry on script runtimes and failures helps prevent tail-latency surprises. [1]
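
The preload-and-fall-back pattern can be sketched as below, assuming the redis-rb API (evalsha/eval, Redis::CommandError): call by SHA-1, and retry with EVAL when Redis replies NOSCRIPT (after SCRIPT FLUSH or a failover to a node without the cached script). The reserve-stock script is an illustrative multi-key example, not from the source.

```ruby
require "digest"

# EVALSHA with graceful EVAL fallback, redis-rb style.
class AtomicScript
  attr_reader :sha

  def initialize(redis, script)
    @redis  = redis
    @script = script
    @sha    = Digest::SHA1.hexdigest(script)  # same SHA Redis computes
  end

  def call(keys:, argv:)
    @redis.evalsha(@sha, keys: keys, argv: argv)
  rescue Redis::CommandError => e
    raise unless e.message.start_with?("NOSCRIPT")
    # EVAL both runs the script and re-caches it for future EVALSHA calls.
    @redis.eval(@script, keys: keys, argv: argv)
  end
end

# Illustrative multi-key script: atomically move stock between two keys,
# but only if enough remains -- no intermediate state is ever visible.
RESERVE_STOCK = <<~LUA
  local available = tonumber(redis.call("get", KEYS[1]) or "0")
  local wanted = tonumber(ARGV[1])
  if available < wanted then return 0 end
  redis.call("decrby", KEYS[1], wanted)
  redis.call("incrby", KEYS[2], wanted)
  return 1
LUA
```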

Snapshotting and AOF strategies for durability

Persistence strategy determines how Redis balances performance with recovery guarantees. RDB snapshots provide point-in-time saves, while AOF keeps an append-only command log that can be fsynced on every write, every second, or left to the OS. [2]

Production setups often combine both: periodic RDB for fast, compact backups, and AOF to minimize data loss between snapshots. Pay attention to rewrite policies, background-save overhead, and AOF rewrite thresholds to avoid blocking under heavy write loads. [1]

Durability settings must be weighed against replication and cluster failover realities: asynchronous replication risks last-second loss, so critical systems may choose stricter fsync or multi-region redundancy. Restore drills and version pinning keep crash recovery predictable in production. [2]
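
A combined RDB-plus-AOF setup might look like the redis.conf fragment below; the thresholds are illustrative starting points, not recommendations from the source:

```
# redis.conf sketch: RDB + AOF combined (values are illustrative)
save 900 1             # RDB snapshot if >= 1 change in 15 min
save 300 100           # ...or >= 100 changes in 5 min
appendonly yes         # enable AOF alongside RDB
appendfsync everysec   # fsync once per second (bounded ~1 s loss)
auto-aof-rewrite-percentage 100  # rewrite when AOF doubles since last rewrite
auto-aof-rewrite-min-size 64mb   # ...but not before it reaches 64 MB
aof-use-rdb-preamble yes         # compact rewrites: RDB preamble + AOF tail
```

`appendfsync always` tightens the loss window to a single write at a real throughput cost, which is the stricter-fsync trade-off mentioned above.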

Hot key mitigation and client-side sharding

Hot keys arise when a single key receives disproportionate traffic. One mitigation duplicates the same value across multiple keys with a random suffix and reads from a randomly chosen copy to spread QPS. [1]

A small in-process cache in Ruby can further absorb extremely hot items, cutting round trips to Redis. Make the mitigation observable: track per-key hit rates and latencies to detect skew early and rebalance. [2]
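
The duplicate-and-read-randomly technique can be sketched as follows, assuming a redis-rb style client; the copy count and key naming are illustrative choices:

```ruby
# Hot-key spreading: write the same value under N suffixed copies,
# read from one copy at random so QPS spreads across them.
class HotKey
  def initialize(redis, copies: 8)
    @redis  = redis
    @copies = copies
  end

  def write(key, value, ttl_s: 60)
    @copies.times { |i| @redis.set(shard_key(key, i), value, ex: ttl_s) }
  end

  def read(key)
    @redis.get(shard_key(key, rand(@copies)))
  end

  # Deterministic copy names: "key:0" .. "key:N-1". In Redis Cluster,
  # plain suffixes (no {hash tags}) hash to different slots, so the
  # copies land on different nodes -- which is the point.
  def shard_key(key, index)
    "#{key}:#{index}"
  end
end
```

The trade-off is write amplification (N sets per update) and a brief window where copies can disagree, so this fits read-heavy, slightly-stale-tolerant data.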

Rate limiting with sliding windows and tokens

Rate limiting relies on Redis atomic operations to enforce fair use on APIs and background jobs. Token buckets and sliding windows can be built from INCR with a TTL, sorted sets, or Lua scripts, giving accurate per-identity enforcement. [1]

Decide up front whether limits fail open or fail closed during a Redis outage, and set sensible key TTLs so counters cannot grow unbounded. Sharding keys by user or region also reduces contention in clusters. [2]
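
A sliding-window variant on sorted sets can be sketched like this, assuming the redis-rb API (zremrangebyscore/zcard/zadd/expire); key naming is illustrative, and in production the ZSET calls would run inside MULTI or a Lua script so the check-then-add is atomic:

```ruby
# Sliding-window rate limiter: per-identity timestamps in a sorted set
# scored by time. The pure window rule is factored out so it can be
# reasoned about without a server.
class SlidingWindowLimiter
  def initialize(redis, limit:, window_s:)
    @redis, @limit, @window = redis, limit, window_s
  end

  def allow?(identity, now = Time.now.to_f)
    key = "rl:#{identity}"
    @redis.zremrangebyscore(key, 0, now - @window)  # drop expired entries
    if @redis.zcard(key) < @limit
      @redis.zadd(key, now, "#{now}:#{rand}")       # record this request
      @redis.expire(key, @window.ceil)              # bound key lifetime
      true
    else
      false
    end
  end

  # Pure form of the same rule: count timestamps still inside the
  # window and check whether there is room for one more.
  def self.allow?(timestamps, now:, window_s:, limit:)
    timestamps.count { |t| t > now - window_s } < limit
  end
end
```

The `expire` call is the unbounded-growth guard from the text: an idle identity's set disappears one window after its last request.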

Caching patterns and versioned keys

Caching patterns keep data fresh without stampeding backend stores: cache-aside for most dynamic data, write-through where consistency is critical, and negative caching for common misses. [1]

Version suffixes and "generational keys" make invalidation safe on deploys or content updates, avoiding stale reads across Ruby processes and regions. Track hit ratio, evictions, and per-command latency to tune TTLs and memory policy in production. [2]
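
Generational keys combine naturally with cache-aside; a sketch, assuming a redis-rb style client and illustrative naming:

```ruby
require "json"

# Cache-aside reads with generational keys: a version counter per
# namespace is baked into every cache key, so bumping it invalidates
# the whole generation at once without scanning keys.
class VersionedCache
  def initialize(redis, namespace, ttl_s: 300)
    @redis, @ns, @ttl = redis, namespace, ttl_s
  end

  def fetch(id)
    key = self.class.versioned_key(@ns, current_version, id)
    cached = @redis.get(key)
    return JSON.parse(cached) if cached
    value = yield                                    # cache-aside: load on miss
    @redis.set(key, JSON.generate(value), ex: @ttl)
    value
  end

  # Bumping the counter orphans every old-generation key; the orphans
  # simply expire via their TTLs rather than being deleted.
  def invalidate_all
    @redis.incr("#{@ns}:version")
  end

  def current_version
    (@redis.get("#{@ns}:version") || 0).to_i
  end

  # Pure key construction, e.g. "products:v3:42".
  def self.versioned_key(namespace, version, id)
    "#{namespace}:v#{version}:#{id}"
  end
end
```

Reading the version counter does add a round trip per fetch; callers that care can pipeline it with the GET or cache the version briefly in-process.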

Bringing the patterns together

Redis at scale in Ruby succeeds by choosing between Pub/Sub and Streams based on delivery semantics, relying on Redlock only for short critical sections, applying HyperLogLog and Bloom filters on analytics-heavy paths, harnessing Lua scripting where atomicity matters, and matching snapshotting and AOF strategies to the business's RPO/RTO. Hot-key mitigation, client-side sharding, rate limiting, and versioned cache keys round out the toolkit, stabilizing latency and cost as throughput grows. [1][2]

  1. https://kanado2000.tistory.com/138
  2. https://www.hellointerview.com/learn/system-design/deep-dives/redis?dslateid=cmhb20nc3000n08ad9shxeset&dslateposition=1