stevencodes.swe - July 13, 2025

Skip lists deep dive, expanding into IG, another dev tool rec

👋 Hey friends,

This week’s newsletter has some good nuggets, from backend insights to creator updates. Here’s what I’ve got:

  • Another peek into Chapter 3 from The Backend Lowdown

  • Creator update: expanding into Instagram

  • Video deep dive: Skip Lists

  • A dev tool recommendation for simple website analytics

Let’s get into it! 👇️ 

The Backend Lowdown: Chapter 3 Preview

Every newsletter will include a snippet from my book in progress, The Backend Lowdown, available for $1 right now on Gumroad!

Batch Patterns

Sometimes you can't eager load everything ahead of time. Maybe the data is conditional, deeply nested, or only accessed after the main query has run. In those cases, eager loading can't save you, but batch loading can.

Batch loading is a pattern that groups multiple similar queries together into one. Instead of firing a separate query every time you load an association, you collect all those requests and resolve them in bulk.

Dataloader is a common implementation of this pattern, especially in GraphQL backends. It batches all .load(id) calls during the same execution tick, fetches the data in a single query, and caches the results for the duration of the request. That means you don't just reduce the query count; you eliminate redundant work too. I recommend looking it up if you're working with a GraphQL API!
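To make the batching mechanics concrete, here's a toy sketch in JavaScript (a stand-in for illustration, not the real dataloader package): collect every id requested during the current tick, then resolve them all with one bulk fetch. `batchFetch` here is a hypothetical stand-in for a single SQL query.

```javascript
// Toy batch loader: load(id) calls made in the same tick are queued,
// then resolved together via one bulk fetch.
function makeLoader(batchFetch) {
  let queue = [];
  let scheduled = false;
  return function load(id) {
    return new Promise(resolve => {
      queue.push({ id, resolve });
      if (!scheduled) {
        scheduled = true;
        // Flush once every load() call from this tick has been queued.
        queueMicrotask(async () => {
          const batch = queue;
          queue = [];
          scheduled = false;
          const ids = [...new Set(batch.map(b => b.id))];
          const byId = await batchFetch(ids); // one query for all ids
          batch.forEach(b => b.resolve(byId.get(b.id)));
        });
      }
    });
  };
}
```

With this shape, `Promise.all([load(1), load(2), load(1)])` triggers a single `batchFetch([1, 2])` instead of three separate queries, and the duplicate id is served from the same batch.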

A Classic Example (GraphQL-Ruby)

Let's say your Comment type has a field for author, and you're returning many comments at once:

# Naive (N+1)
def author
  User.find(object.author_id)
end

This runs one query per comment. With 100 comments, that's 100 queries.

Now with BatchLoader:

# Efficient
def author
  BatchLoader::GraphQL.for(object.author_id).batch do |author_ids, loader|
    User.where(id: author_ids).each do |user|
      loader.call(user.id, user)
    end
  end
end

Now all author_id values get collected and loaded in a single query: one query total, no matter how many comments.

Creator Update: Expanding Into Instagram

Just a quick heads-up: I’ve started posting to Instagram using the same handle, @stevencodes.swe. If you hang out there more than TikTok, feel free to follow me there! Oh, and I also created a Threads account so you might see me pop up there too.

Video Deep Dive: Skip Lists

This week my Skip List TikTok blew past 80K views in just a couple of days 🎉 and the flood of comments told me one thing: you want more than a 60-second overview. Viewers asked how skip lists fit into real systems, why they’re favored over balanced trees, and even how to tune them for production.

Skip lists aren’t just academic curiosities or interview fodder. They power core features in Redis (ZRANGE/sorted-set lookups), underpin LevelDB’s in-memory write buffer, and even accelerate search indexes in Lucene and Elasticsearch. By leaning on simple, randomized towers instead of complex rotations, they deliver tree-like performance with list-like code.

In this deep dive, we'll recap the basics and take a look at a micro benchmark I ran on my MacBook comparing them to red-black trees, which produced some surprising results. Let's take a look!

Recapping The Basics

At its core, a skip list is just a sorted linked list with “express lanes” that let you jump ahead. Instead of strictly balancing child pointers (like in a red-black or AVL tree), each node flips a coin to decide how tall its “tower” should be. Nodes with more heads get promoted into higher levels, creating shortcuts over the base list.

Why randomization?

  • On average, each coin flip has a 50% chance of heads, so the expected tower height is just 2.

  • You still get O(log N) search, insert, and delete on average, without any of the headache of rotations or color-flip rules.

  • If you ever see a runaway height (lots of heads in a row), you simply cap the maximum level (e.g. 32 or 64) to guard against pathological cases.

Here’s what the structure looks like:

Searching in a skip list

  • Level 0 is the full list of all keys.

  • Higher levels contain only those nodes that “won” enough coin flips.

  • Search starts at the top-left HEAD, hops forward until it would overshoot, then drops down one level and repeats until it lands on the target key.

That simple “skip and drop” pattern is all you need to match tree-like performance with linked-list simplicity.
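The coin-flip promotion and the "skip and drop" search above can be sketched in a few dozen lines. This is a minimal illustrative sketch, not a production implementation; the level cap and p value mirror the defaults discussed above.

```javascript
// Minimal skip list: randomized tower heights plus "skip and drop" search.
const MAX_LEVEL = 16; // cap to guard against runaway heights
const P = 0.5;        // coin-flip probability for promotion

function randomLevel() {
  let level = 1;
  while (Math.random() < P && level < MAX_LEVEL) level++;
  return level;
}

class SkipList {
  constructor() {
    // Head node carries a forward pointer at every level.
    this.head = { key: -Infinity, forward: new Array(MAX_LEVEL).fill(null) };
    this.level = 1; // highest level currently in use
  }

  insert(key) {
    const update = new Array(MAX_LEVEL).fill(this.head);
    let node = this.head;
    // Walk top-down, remembering the last node before the insertion point at each level.
    for (let i = this.level - 1; i >= 0; i--) {
      while (node.forward[i] && node.forward[i].key < key) node = node.forward[i];
      update[i] = node;
    }
    const lvl = randomLevel();
    if (lvl > this.level) this.level = lvl;
    const newNode = { key, forward: new Array(lvl).fill(null) };
    for (let i = 0; i < lvl; i++) {
      newNode.forward[i] = update[i].forward[i];
      update[i].forward[i] = newNode;
    }
  }

  search(key) {
    let node = this.head;
    for (let i = this.level - 1; i >= 0; i--) {
      // Hop forward until we'd overshoot, then drop down a level.
      while (node.forward[i] && node.forward[i].key < key) node = node.forward[i];
    }
    node = node.forward[0];
    return node !== null && node.key === key;
  }
}
```

Notice there's no rebalancing step anywhere: the randomized levels do all the work that rotations do in a red-black tree.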

Benchmarking

I was curious how a skip list stacks up in practice, so I threw together a quick benchmark comparing it to a red-black tree using these two JavaScript packages:

Here are the results:

Skip list vs red-black tree performance on Node v22.17.0

In short, for a single-threaded throughput race in Node.js, red-black trees win on constant factors: fewer hops, tighter memory, and engine-friendly branching.

That said, skip lists still have their charm: no rotations, easier lock-free concurrency, and tunable performance. So while they lost the raw ops/sec race here, they hold their own in real-world systems where those features matter.

In fact, I was able to tune them quite easily by adjusting p, the probability of the coin toss, within the benchmark script here:

const sl = new SkipList({ p: 0.33, maxHeight: 16 }); // use 0.33 to get better performance

If you’re curious, here’s the micro benchmark script: https://pastebin.com/6Q0rPqgx

See the video here:

@stevencodes.swe

Skip lists are like express lanes for linked lists 🚀 fast, simple, and used in real systems like Redis.

Dev Tool Recommendation: Umami

After putting up my personal website, stevencodes.dev, eight days ago, I wanted to know how it was doing, and that meant getting some visitor analytics. I reached for a tool I’ve used several times in the past: Umami, a lightweight open source analytics platform.

Why Umami? Well, first and foremost: it’s free! 😅 But beyond that, it gives you surprisingly good insights for something you can set up in just minutes. I didn’t need (or want) anything intrusive; I just wanted to know:

  • How many people are visiting?

  • How often are my page links getting clicked?

Umami makes this dead simple. I just needed to do the following (I use the hosted version, so I just signed up for an account):

  1. Add the script tag to the page header

<script defer src="https://cloud.umami.is/script.js" data-website-id="YOUR_ID_HERE"></script>

Just adding that one line was enough to start tracking visits. Pretty powerful for a single tag!

  2. Tag the links to track clicks

<a
  href="https://stevencodes-swe.beehiiv.com/"
  class="link"
  target="_blank"
  rel="noopener noreferrer"
  data-umami-event="Subscribe Newsletter">
  <span class="icon">📧</span>
  Subscribe to my weekly newsletter
</a>
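Data attributes cover most cases, but Umami's tracker also exposes a global `umami.track(eventName)` for firing events from code, handy when the action is dynamic rather than a static link. A small sketch (the event name and guard are my own, not from my site):

```javascript
// Fire a custom Umami event programmatically. Guarding on window.umami
// keeps the page working if the tracker script is blocked or not loaded.
function trackEvent(name) {
  if (typeof window !== "undefined" && window.umami) {
    window.umami.track(name);
    return true;
  }
  return false; // tracker unavailable (ad blocker, server-side render, etc.)
}
```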

That was literally it. Here are some of my website’s statistics in case you’re interested:

Traffic over the past 7 days 😅 

So if you need some quick, rudimentary analytics for free, I recommend checking out Umami!

That’s a wrap for this week. If something here made your day smoother, feel free to reply and tell me about it. And if you think a friend or teammate would enjoy this too, I’d be grateful if you shared it with them.

Until next time,
Steven