stevencodes.swe - Jun 1, 2025

Hey there 👋

This week, I want to talk about a sneaky source of deploy pain: indexing in production.

You’ve probably heard that CREATE INDEX CONCURRENTLY is the “safe” way to go - but that’s only part of the story. In practice, building indexes in prod can still lead to stalled deploys, I/O spikes, and nasty surprises if you’re not careful about how and when you do it.

In this excerpt from Chapter 2 of the book, I break down the real-world best practices I follow when adding indexes to live systems - how to keep your deploys smooth, your database fast, and your team sane.

Let’s dive in 👇

From The Backend Lowdown: Best Practices for Real-World Deployments

Every newsletter will include a snippet from my book in progress, The Backend Lowdown, available for $1 right now on Gumroad!

Adding indexes in production isn't just about syntax - it's about timing, safety, and minimizing risk. These best practices go beyond CREATE INDEX CONCURRENTLY to help you avoid downtime, reduce deploy friction, and make your indexing workflow smoother across environments.

Keep Concurrent Indexes Out of Migration Pipelines

Don't block your deploy pipeline waiting for a large index to build.

Instead: Run the actual CREATE INDEX CONCURRENTLY manually (or in a separate, async deploy step), and merge a placeholder migration that uses IF NOT EXISTS - it keeps your schema history honest while being effectively a no-op in production, where the index already exists.

Example:

class AddIndexOnOrdersUserId < ActiveRecord::Migration[8.0]
  # CREATE INDEX CONCURRENTLY can't run inside a transaction block
  disable_ddl_transaction!

  def up
    # Effectively a no-op in production: the index is created manually ahead of
    # the deploy, and IF NOT EXISTS makes this safe to re-run. Other environments
    # (dev, test, staging) still get the index from the migration.
    execute <<~SQL
      CREATE INDEX CONCURRENTLY IF NOT EXISTS index_orders_on_user_id ON orders (user_id);
    SQL
  end

  def down
    execute <<~SQL
      DROP INDEX CONCURRENTLY IF EXISTS index_orders_on_user_id;
    SQL
  end
end
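
If you go the manual route, the out-of-band step can be as simple as a psql session. A minimal sketch - the timeout values here are illustrative, not prescriptions:

-- Run out of band (psql session or async deploy task), not in the migration pipeline
SET statement_timeout = 0;   -- index builds can legitimately run long
SET lock_timeout = '5s';     -- fail fast if the brief catalog locks can't be acquired
CREATE INDEX CONCURRENTLY IF NOT EXISTS index_orders_on_user_id ON orders (user_id);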

Monitor While Building

Even with CONCURRENTLY, index creation can slow down or stall. Large tables take time to scan. High write volume means Postgres has to track and replay changes during a second "catch-up" phase. And if disk I/O is constrained or your table is under heavy contention, the build can drag or fail.

Monitoring progress with pg_stat_progress_create_index helps you catch these issues early and avoid surprises.

SELECT * FROM pg_stat_progress_create_index;

This view shows how far along the build is, which phase it's in, and whether it's stalling.
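
For a quick percent-complete readout, you can lean on the view's blocks and tuples counters (available since Postgres 12, where the view was introduced):

SELECT
  pid,
  phase,
  round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS pct_blocks,
  round(100.0 * tuples_done / nullif(tuples_total, 0), 1) AS pct_tuples
FROM pg_stat_progress_create_index;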

Avoid Concurrent Indexes in High-Churn Tables During Peak Load

Concurrent index creation avoids locking writes, but still reads the whole table. On a massive, frequently updated table, it can:

  • Create I/O pressure

  • Cause contention on hot pages

  • Take a long time

If possible, schedule index creation for off-peak hours or ensure replicas are healthy.
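
One quick pre-flight check before kicking off a build, assuming Postgres 10+ where replay_lag is exposed - run this on the primary to confirm replicas are keeping up:

SELECT application_name, state, sync_state, replay_lag
FROM pg_stat_replication;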

Backend Tip: Dropping Empty Tables Can Still Deadlock

Here’s a sneaky one that actually bit us in production this week:
Dropping a table - even if it’s completely empty - can still cause a deadlock.

Why? Foreign key constraints.

When you drop a table that's connected to other tables by foreign keys, Postgres has to acquire locks not just on the table being dropped, but also on the related tables. If another transaction already holds a conflicting lock (say, from a write), you've got a recipe for a deadlock - even if no rows exist.

Tip: Before dropping a table in production, check for foreign keys and consider scheduling it during low traffic windows or using a maintenance migration strategy.
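
A minimal sketch of that pre-drop check - old_audit_logs is a hypothetical table name, and the timeout is illustrative:

-- Foreign keys where the table you plan to drop is on either side
SELECT conname             AS constraint_name,
       conrelid::regclass  AS referencing_table,
       confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE contype = 'f'
  AND (conrelid = 'old_audit_logs'::regclass OR confrelid = 'old_audit_logs'::regclass);

-- When you do drop it, fail fast instead of queueing behind (or deadlocking with) other locks
SET lock_timeout = '5s';
DROP TABLE old_audit_logs;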

Small table, big consequences. 🧨

In Case You Missed It: Circuit Breakers

If you've ever had a background job queue go haywire because a third-party API was down, you’re not alone - and that’s exactly what circuit breakers are built to prevent.

In this short video, I explain the circuit breaker pattern: why it matters, how it works, and how it can stop retries from thrashing your systems.
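
If you'd like the idea in code first, here's a minimal plain-Ruby sketch - the class name, thresholds, and timings are illustrative, not from the video or any particular gem:

class CircuitBreaker
  CircuitOpenError = Class.new(StandardError)

  def initialize(failure_threshold: 5, reset_timeout: 30)
    @failure_threshold = failure_threshold
    @reset_timeout     = reset_timeout
    @failures          = 0
    @opened_at         = nil
  end

  # Wrap the flaky call: breaker.call { ThirdPartyApi.fetch! }
  def call
    raise CircuitOpenError if open? && !retry_window_elapsed?

    result = yield
    reset            # success closes the circuit again
    result
  rescue CircuitOpenError
    raise            # blocked calls don't count as new failures
  rescue StandardError
    record_failure
    raise
  end

  private

  def open?
    !@opened_at.nil?
  end

  def retry_window_elapsed?
    Time.now - @opened_at >= @reset_timeout
  end

  def record_failure
    @failures += 1
    @opened_at = Time.now if @failures >= @failure_threshold
  end

  def reset
    @failures  = 0
    @opened_at = nil
  end
end

Callers wrap the outbound request in breaker.call { ... } and treat CircuitOpenError as a cue to skip the work or fall back, instead of hammering a service that's already down.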

Watch it here 👇

@stevencodes.swe

Your API is down. Do you keep retrying… or back off? Here’s how circuit breakers protect your system from cascading failures and help it s...

That’s a wrap for this week. If something here made your day smoother, feel free to reply and tell me about it. And if you think a friend or teammate would enjoy this too, I’d be grateful if you shared it with them.

Until next time,
- Steven