
Crash Recovery

Your server currently saves data on clean shutdown but loses everything if it crashes. In this stage, you’ll add durability so data survives unexpected failures.

Implement a Write-Ahead Log (WAL) that records operations before they’re applied to memory. Each write operation must be written to the log file before updating your in-memory store.

Your log should record operations in append-only fashion. The format is up to you - JSONL (one JSON object per line), binary serialization, or plain text all work.

Each log entry needs enough information to replay the operation:

  • Operation type (e.g., “set”, “delete”, “clear”)
  • Key
  • Value (for operations that carry one)
  • Any other metadata you need for replay
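
If you go with JSONL, for example, entries might look like this (the field names here are illustrative, not required by the tests):

```json
{"op": "set", "key": "canada:capital", "value": "Ottawa"}
{"op": "delete", "key": "canada:capital"}
{"op": "clear"}
```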

After appending an operation to the log, ensure it’s physically written to disk before responding to the client. Flushing application buffers isn’t enough; use your language’s sync mechanism (fsync or equivalent) to force the operating system to persist the write.

Without sync, the OS may buffer writes in memory and you’ll lose data on crash.
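
As a sketch, here’s what that discipline might look like in Python. The `WriteAheadLog` class, file path, and entry fields are illustrative, not required by the tests:

```python
import json
import os

class WriteAheadLog:
    """Append-only JSONL log; every append is fsync'd before returning."""

    def __init__(self, path):
        # "a" creates the file if missing and always appends at the end.
        self._file = open(path, "a", encoding="utf-8")

    def append(self, op, key=None, value=None):
        entry = {"op": op, "key": key, "value": value}
        self._file.write(json.dumps(entry) + "\n")
        self._file.flush()             # Python buffer -> OS page cache
        os.fsync(self._file.fileno())  # OS page cache -> disk

wal = WriteAheadLog("/tmp/wal.log")
wal.append("set", "canada:capital", "Ottawa")
wal.append("delete", "canada:capital")
```

Note the two steps: `flush()` only hands the bytes to the operating system, while `os.fsync()` is the call that actually blocks until the disk has them.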

Syncing on every write is slow: you’re blocking the response on a disk round-trip, and holding locks during that I/O serializes concurrent writers. This is the right trade-off for durability in a simple implementation. Production databases amortize the cost by batching multiple operations into a single fsync.

When your server starts:

  1. Load the most recent snapshot (from the persistence stage) if one exists
  2. Replay all operations from the WAL that occurred after the snapshot
  3. Resume serving requests

If no snapshot exists, replay the entire log from the beginning.
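
A minimal recovery sketch in Python, assuming a JSONL WAL and a single JSON snapshot file (the file names are illustrative). One detail worth handling: a SIGKILL can land mid-append and leave a torn, partially written final line, so the replay loop tolerates one trailing partial entry:

```python
import json
import os

def recover(data_dir):
    """Rebuild the in-memory store: snapshot first, then WAL replay."""
    store = {}
    snap_path = os.path.join(data_dir, "snapshot.json")
    wal_path = os.path.join(data_dir, "wal.log")

    # 1. Load the most recent snapshot, if one exists.
    if os.path.exists(snap_path):
        with open(snap_path, encoding="utf-8") as f:
            store = json.load(f)

    # 2. Replay every operation logged after that snapshot.
    if os.path.exists(wal_path):
        with open(wal_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    entry = json.loads(line)
                except json.JSONDecodeError:
                    break  # torn final write from the crash; ignore it
                if entry["op"] == "set":
                    store[entry["key"]] = entry["value"]
                elif entry["op"] == "delete":
                    store.pop(entry["key"], None)
                elif entry["op"] == "clear":
                    store.clear()
    return store
```

Ignoring a torn tail is safe because that entry was never fsync’d to completion, so its write was never acknowledged to a client.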

As your log grows, replaying from the beginning becomes slow. Periodically create snapshots of your in-memory state and truncate the log.

When to checkpoint is up to you - after N operations, every M seconds, when the log reaches a certain size, etc. The test doesn’t care about your checkpoint strategy, only that recovery works correctly.

After creating a snapshot:

  1. Write the snapshot to a new file and sync it to disk
  2. Truncate or create a new WAL file (only after the snapshot is durable, or a crash between the two steps loses data)
  3. Continue logging operations

On recovery, load the latest snapshot and replay only the operations logged after that snapshot.
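
One way to sketch the checkpoint step in Python, again assuming JSON files with illustrative names. The ordering matters: the snapshot must be durable before the log is truncated:

```python
import json
import os

def checkpoint(store, data_dir):
    """Snapshot the current state, then truncate the WAL, in that order."""
    snap_path = os.path.join(data_dir, "snapshot.json")
    tmp_path = snap_path + ".tmp"

    # Write to a temp file and fsync, so a crash mid-write can't
    # leave a half-written snapshot in place of the previous one.
    with open(tmp_path, "w", encoding="utf-8") as f:
        json.dump(store, f)
        f.flush()
        os.fsync(f.fileno())

    # Atomically swap in the new snapshot.
    os.replace(tmp_path, snap_path)

    # Only now is it safe to drop the old log entries.
    with open(os.path.join(data_dir, "wal.log"), "w") as f:
        f.flush()
        os.fsync(f.fileno())
```

`os.replace` has atomic-rename semantics on POSIX, so a crash mid-checkpoint leaves either the old snapshot or the new one, never a corrupt file. A fully paranoid version would also fsync the containing directory after the rename; simple implementations often skip that.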

You now have two types of files:

  • Snapshot: Full state at a point in time (from previous stage)
  • WAL: Operations logged since the last snapshot

Organize these in the data directory however makes sense - separate files, subdirectories, naming conventions, etc. The test only cares that recovery works, not how you structure the files.

The test harness mounts a persistent volume at /app/data and sets the DATA_DIR environment variable to /app/data, same as the previous stage.

Your server will be tested with unexpected crashes:

$ clstr test crash-recovery
Testing crash-recovery: Data Survives SIGKILL
✓ Data Survives a Hard Crash
✓ All Data Survives Repeated Hard Crashes
✓ Rapid Sequential Writes Survive a Hard Crash
✓ Rapid Concurrent Writes Survive a Hard Crash
✓ CLEAR Survives a Hard Crash
PASSED ✓
Run 'clstr next' to advance to the next stage.

The tests will:

  1. Store data in your server
  2. Kill the server process (SIGKILL) without warning
  3. Restart your server
  4. Verify all data that was acknowledged before the crash is still present

Your server’s output (stdout/stderr) is captured during testing and viewable with clstr logs. The KILLED and STARTED markers show exactly when the crash and restart occurred:

$ clstr logs n1
================ STARTED ================
Server listening on 0.0.0.0:8080
PUT /kv/canada:capital accepted, value=Ottawa
PUT /kv/brazil:capital accepted, value=Brasilia
PUT /kv/australia:capital accepted, value=Canberra
PUT /kv/japan:capital accepted, value=Tokyo
Appended 4 entries to /app/data/wal.log
================ KILLED ================
================ STARTED ================
Server listening on 0.0.0.0:8080
Replaying 4 entries from /app/data/wal.log
GET /kv/canada:capital returning 200
GET /kv/brazil:capital returning 200
GET /kv/australia:capital returning 200
GET /kv/japan:capital returning 200