stevencodes.swe - August 17, 2025

Dev tool radar, weekly video highlight

👋 Hey friends,

Here’s what I’ve got in store for you this week:

  • Snippet from The Backend Lowdown

  • Weekly Video Highlight: Race Condition

  • Dev Tool Radar: Conductor

Let’s get into it 👇

The Backend Lowdown: Chapter 3 Preview

Every newsletter will include a snippet from my book in progress, The Backend Lowdown, available for $1 right now on Gumroad!

OFFSET Pagination Solutions (Continued)

Solution 2: Cursor-Based Pagination (Keyset in Disguise)

Cursor pagination is keyset pagination with the implementation details hidden behind an opaque token.

require 'base64'

# Build an opaque token from the last record on the page
def encode_cursor(last_record)
  # strict_encode64 avoids the trailing newline that encode64 appends
  Base64.strict_encode64("#{last_record.created_at.to_i}:#{last_record.id}")
end

# Unpack the token back into the keyset values
def decode_cursor(cursor)
  timestamp, id = Base64.strict_decode64(cursor).split(':')
  [Time.at(timestamp.to_i), id.to_i]
end
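To see the consuming side, here's a runnable, self-contained sketch where hash-based records stand in for ActiveRecord models and a plain array stands in for the keyset query (the helpers are restated with the strict Base64 variant so the snippet runs on its own; `next_page` and its defaults are mine, not from the chapter):

```ruby
require 'base64'

# Restated helpers (strict variants avoid the trailing newline encode64 appends)
def encode_cursor(record)
  Base64.strict_encode64("#{record[:created_at].to_i}:#{record[:id]}")
end

def decode_cursor(cursor)
  timestamp, id = Base64.strict_decode64(cursor).split(':')
  [Time.at(timestamp.to_i), id.to_i]
end

# In-memory stand-in for the keyset query:
#   WHERE (created_at, id) < (?, ?) ORDER BY created_at DESC, id DESC LIMIT per
def next_page(records, cursor: nil, per: 2)
  sorted = records.sort_by { |r| [-r[:created_at].to_i, -r[:id]] }
  if cursor
    ts, id = decode_cursor(cursor)
    sorted = sorted.select { |r| ([r[:created_at].to_i, r[:id]] <=> [ts.to_i, id]) < 0 }
  end
  page = sorted.first(per)
  [page, page.empty? ? nil : encode_cursor(page.last)]
end
```

The client only ever sees the token: it hands back `next_cursor` to get the next page, and the server is free to change what's inside the token later.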

Why bother with encoding?

  • API stability: You can change the underlying implementation without breaking clients

  • Security: a well-formed cursor is harder to guess than a raw ID, and you can sign or encrypt the token if clients must not be able to tamper with it

  • Flexibility: Cursors can encode multiple sort fields, filters, or even query versions

Real-world example: GitHub's GraphQL API uses cursor pagination exclusively. Their cursors are base64-encoded tokens that hide the underlying implementation details, allowing GitHub to change how pagination works without breaking client applications.

Solution 3: The Hybrid Approach (Pragmatic Reality)

Sometimes you need page numbers for UX but want to avoid the worst performance cliffs. The hybrid approach sets boundaries:

class Product < ApplicationRecord
  MAX_PAGE = 100
  
  def self.paginate(page: 1, per: 20)
    raise "Page #{page} exceeds maximum of #{MAX_PAGE}" if page > MAX_PAGE
    
    # Use OFFSET for allowed range
    offset((page - 1) * per).limit(per)
  end
end

But wait, this still uses OFFSET!

Yes, but with crucial differences:

  • By capping at page 100, your worst-case query skips only 2,000 rows (100 pages × 20 per page), not 2 million

  • You can pre-warm caches for common pages (1-10)

  • Deep pagination becomes a search problem, not a browsing problem
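To make the trade-off concrete, here's a runnable toy version of the cap (plain Ruby over an in-memory array; the names and defaults are mine, not from the chapter):

```ruby
MAX_PAGE = 100

# In-memory stand-in for the capped OFFSET query. Past MAX_PAGE we refuse to
# paginate; at the defaults the worst case skips 100 * 20 = 2,000 rows.
def paginate(rows, page: 1, per: 20)
  raise ArgumentError, "page must be >= 1" if page < 1
  raise ArgumentError, "page #{page} exceeds maximum of #{MAX_PAGE}" if page > MAX_PAGE

  rows.drop((page - 1) * per).first(per)
end
```

Past the cap, you'd route the user to search or filtering instead of raising, but the guard rail is the point: the database never eats an unbounded OFFSET.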

Weekly Video Highlight: Race Condition

The best performing video this past week was about a common gotcha in backend systems. See if you can spot the issue with this code:

if !User.exists?(email: params[:email])
  User.create!(email: params[:email])
end

It looks totally fine, right? If the user doesn’t exist, go ahead and create it! There’s a problem though: what happens when two requests arrive with the same email at the same time? Both can pass the exists? check before either row is written. If you don’t have a unique index, you’ll get duplicate records. If you do, one request will succeed while the other raises a unique violation error. And if there’s rescue and retry logic around it, you may even get duplicate side effects!

This sparked a lot of engagement with comments like:

  • No one would send the same request for the same user email twice.

    Absolutely false. I’ve fixed this issue many times in production systems; think of a miswired signup or purchase button, or a user double-clicking. It happens more often than you think!

  • I’ll just use a critical section instead.

    Unfortunately, that’s a per-process lock, and production systems almost always run more than a single process. Think of multiple web servers, background job runners, etc.

  • Why not just keep the code as-is and let the unique constraint raise?

    First, there’s no need to make two trips to the database with this code (the existence check followed by the create). Second, this is a race condition, and it’s never a good idea to keep one around. Third, handling control flow with exceptions is typically an anti-pattern, and handing a unique constraint violation back to the client is not the best UX.

The solution? First, add a unique index on the email column. Then use an atomic write to let the database arbitrate and handle the case where the email is already taken:

INSERT INTO users (email)
VALUES ($1)
ON CONFLICT (email) DO NOTHING;

You’ll also want to pair this with an idempotency key so retries don’t duplicate work.
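The shape of an idempotency key is roughly this in-memory sketch (the class and names are mine for illustration; a real system would persist keys in the database or Redis so they survive across processes, for exactly the reasons the critical-section comment above gets wrong):

```ruby
# Toy idempotency store: remembers the result of the first call per key and
# replays it on retries instead of re-running the side effect.
class IdempotencyStore
  def initialize
    @results = {}
    @mutex = Mutex.new
  end

  # Runs the block at most once per key; retries get the cached result.
  def run(key)
    @mutex.synchronize do
      return @results[key] if @results.key?(key)
      @results[key] = yield
    end
  end
end
```

The client sends the same key on every retry (for a signup, something like a client-generated UUID per submission), so a double click replays the first result instead of creating twice.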

If you want to watch the full video or join the discussion, you can find it here:

@stevencodes.swe


Dev Tool Radar

I’ve only just set this up, so I can’t fully recommend it yet, but I’ve been looking at Conductor for running Claude Code in parallel. It uses git worktrees under the hood to let you run multiple Claude Code instances side by side, each in its own working copy. If you’re using it already, reply to this and let me know what you think! If you’re not, give it a try! Tools that orchestrate many LLM instances and conversations at once feel like a natural next step for all of us.

That’s a wrap for this week. If something here made your day smoother, feel free to reply and tell me about it. And if you think a friend or teammate would enjoy this too, I’d be grateful if you shared it with them.

Until next time,
Steven