stevencodes.swe - Nov 2, 2025
Queue design tips, book snippet
Hey friends,
Here's what I've got in store for you this week:
A snippet from Chapter 4 of The Backend Lowdown
A bit about queue design
Let's get into it.
The Backend Lowdown: Chapter 4 Preview
Every newsletter will include a snippet from my book in progress, The Backend Lowdown, available for $5 right now on Gumroad!
Get The Backend Lowdown
Key Design & Versioning
Good cache key design is the difference between a cache that "just works" and one that serves wrong data, explodes in size, or requires complex invalidation logic. With a few simple conventions, you can build keys that are correct, memory-efficient, and trivial to invalidate.
The strategy rests on two principles:
Include every input that affects the output in your cache key. If changing something produces different results, it must be in the key.
Use version segments instead of wildcards for invalidation - bump a version number to invalidate groups of keys without expensive scans or pattern matching.
When it helps
Users see each other's data because you forgot to include user ID or locale in the cache key
You can't figure out which list/search result caches to invalidate when a single item changes
Your cache hit rate is terrible because timestamps in keys create infinite unique entries
You're hitting key length limits in Memcached (250 chars) or Redis memory is exploding
# Build keys with ALL inputs that affect the output
def product_entity_key(id, tenant:, locale:, currency:, role:, flags:)
  CacheKeys.entity(
    entity: "product",
    id: id,
    tenant: tenant,
    inputs: {
      locale: locale,     # Different languages = different cache entries
      currency: currency, # USD vs EUR pricing
      role: role,         # Admin sees different fields than customer
      flags: flags.sort   # Feature flags affect output
    }
  )
end

def category_list_key(category_id, tenant:, page:, sort:, locale:)
  CacheKeys.collection_key(
    namespace: "products:category",
    scope_id: category_id, # Which category's products
    tenant: tenant,
    page: page,            # Pagination
    sort: sort,            # price_asc vs name_desc
    inputs: { locale: locale }
  )
end
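
# How does bumping a version invalidate anything? Here's a sketch of the
# read side (hypothetical - this snippet doesn't show the real CacheKeys
# internals): the collection key embeds the current version, so after an
# incr, old keys simply stop being generated and age out on their own.
def example_collection_key(namespace:, scope_id:, tenant:, page:, sort:, inputs:)
  version = redis.get(
    CacheKeys.collection_version_key(namespace: namespace, scope_id: scope_id)
  ) || 0
  input_part = inputs.sort.map { |k, v| "#{k}=#{v}" }.join(",")
  "#{tenant}:#{namespace}:#{scope_id}:v#{version}:p#{page}:#{sort}:#{input_part}"
end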
# Invalidation is now trivial - just bump the version!
def bump_category_version!(category_id)
  key = CacheKeys.collection_version_key(
    namespace: "products:category",
    scope_id: category_id
  )
  redis.incr(key) # Atomic increment - all old cache entries instantly unreachable
end

Recent Thoughts: Queue Design

Something I've been thinking about lately is queue design. There are some good resources out there, such as Sidekiq in Practice for Rails users, but for a lot of cases, it boils down to just a few core principles:
Bucket by SLA
Rather than breaking out a queue for each "concept" (mailers, payments, fulfillment, etc.), break your queues down by SLA instead. Concept queues mix slow and fast jobs, causing head-of-line blocking and messy capacity planning. Segregating by SLA cuts the cognitive load and complexity of your queues and makes tightening up queue performance much easier. By SLA, we mean the target time for a job to complete once enqueued, so instead of the scheme above, try something like realtime (<5s), near_real_time (<60s), batch (mins+).
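As a sketch of what that looks like in practice, assuming Sidekiq (the job classes here are made up for illustration):

# Queues are named for latency targets, not business concepts
class ChargePaymentJob
  include Sidekiq::Job
  sidekiq_options queue: :realtime # <5s: a user is actively waiting
end

class SendReceiptEmailJob
  include Sidekiq::Job
  sidekiq_options queue: :near_real_time # <60s: expected "soon"
end

class RebuildSearchIndexJob
  include Sidekiq::Job
  sidekiq_options queue: :batch # mins+: nobody is blocked on it
end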
Once you have your queues sectioned out by SLA, you need to worry about capacity.
Plan Capacity With Little's Law
Little's Law has a simple formula: L = λ × W, where:
L = jobs in system
λ = arrival rate
W = average time in system (target SLA)
As an example:
λ = 5 jobs/sec, W = 60s → max WIP L ≤ 300. If the queue holds more than 300 jobs, you're breaking SLA.
As a capacity sanity check:
Workers needed ≈ λ / μ (where μ = jobs/worker/sec), plus 20–50% headroom for spikes/retries.
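Here's that arithmetic in a few lines of Ruby (the numbers match the example above; μ and the headroom factor are assumptions you'd measure and choose for your own system):

arrival_rate = 5.0  # λ: jobs/sec entering the queue
sla_seconds  = 60.0 # W: target time in system
service_rate = 0.5  # μ: jobs/sec a single worker can process (measured)
headroom     = 1.3  # +30% buffer for spikes and retries

max_wip = arrival_rate * sla_seconds # L = λ × W = 300 jobs
workers = (arrival_rate / service_rate * headroom).ceil # 13 workers

puts "Alert if queue depth exceeds #{max_wip.to_i}; run #{workers} workers"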
Using the above approach, you can see drastic improvements in your background job processing. Of course, don't forget your typical guard rails such as exponential backoff + jitter, using a DLQ, and per-queue rate limits!
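For the backoff guard rail, the usual shape looks something like this (a generic sketch, not tied to any particular job library):

# Exponential backoff with full jitter: the cap bounds the wait, and the
# randomness keeps a burst of failures from retrying in lockstep
def retry_delay_seconds(retry_count, base: 2, cap: 600)
  rand(0..[base**retry_count, cap].min)
end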
That's a wrap for this week. If something here made your day smoother, feel free to reply and tell me about it. And if you think a friend or teammate would enjoy this too, I'd be grateful if you shared it with them.
Until next time,
Steven