Crash Recovery
Your server currently saves data on clean shutdown but loses everything if it crashes. In this stage, you’ll add durability so data survives unexpected failures.
Write-Ahead Logging
Implement a Write-Ahead Log (WAL) that records operations before they’re applied to memory. Each write operation must be written to the log file before updating your in-memory store.
Log Format
Your log should record operations in append-only fashion. The format is up to you - JSONL (one JSON object per line), binary serialization, or plain text all work.
Each log entry needs enough information to replay the operation:
- Operation type (e.g., “set”, “delete”, “clear”)
- Key
- Value
- Any other metadata you need for replay
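Each bullet maps naturally onto one JSONL record. A minimal sketch in Python (field names like `op` and `key` are illustrative choices, not a required format):

```python
import json

def encode_entry(op, key, value=None):
    # One operation -> one newline-terminated JSON line, ready to append.
    entry = {"op": op, "key": key}
    if value is not None:
        entry["value"] = value
    return json.dumps(entry) + "\n"
```

For example, `encode_entry("set", "canada:capital", "Ottawa")` yields a single line that can later be parsed independently of every other line in the log.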
Durability
After appending an operation to the log, ensure it’s physically written to disk before responding to the client. Use your language’s file sync mechanism (fsync, flush, etc.) to force the operating system to persist the write.
Without sync, the OS may buffer writes in memory and you’ll lose data on crash.
Syncing on every write is slow: you’re blocking the response on a disk round-trip, and holding locks during that I/O serializes concurrent writers. This is the right trade-off for durability in a simple implementation. Production databases amortize the cost by batching multiple operations into a single fsync.
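In Python, for example, the flush-then-fsync sequence looks like this (the `append_durably` helper is a sketch, not a required API):

```python
import os

def append_durably(log_path, line):
    # "a" appends and creates the file if it doesn't exist yet.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(line)
        f.flush()              # push Python's userspace buffer to the OS
        os.fsync(f.fileno())   # force the OS page cache onto the disk
```

Only after `fsync` returns should the server acknowledge the write to the client; acknowledging earlier reintroduces the buffered-write data-loss window described above.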
Recovery Procedure
When your server starts:
- Load the most recent snapshot (from the persistence stage) if one exists
- Replay all operations from the WAL that occurred after the snapshot
- Resume serving requests
If no snapshot exists, replay the entire log from the beginning.
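Assuming a JSON-dict snapshot and a JSONL WAL with set/delete/clear entries (both assumptions, since the formats are up to you), the startup procedure can be sketched as:

```python
import json
import os

def recover(store, snapshot_path, wal_path):
    # 1. Load the most recent snapshot, if one exists.
    if os.path.exists(snapshot_path):
        with open(snapshot_path, encoding="utf-8") as f:
            store.update(json.load(f))
    # 2. Replay every WAL entry logged after that snapshot.
    if os.path.exists(wal_path):
        with open(wal_path, encoding="utf-8") as f:
            for line in f:
                entry = json.loads(line)
                if entry["op"] == "set":
                    store[entry["key"]] = entry["value"]
                elif entry["op"] == "delete":
                    store.pop(entry["key"], None)
                elif entry["op"] == "clear":
                    store.clear()
    # 3. The store is now caught up; resume serving requests.
    return store
```

Note that replay must apply operations in log order; because the WAL is append-only, reading the file top to bottom gives exactly that order.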
Checkpointing
As your log grows, replaying from the beginning becomes slow. Periodically create snapshots of your in-memory state and truncate the log.
When to checkpoint is up to you - after N operations, every M seconds, when the log reaches a certain size, etc. The test doesn’t care about your checkpoint strategy, only that recovery works correctly.
After creating a snapshot:
- Write the snapshot to a new file
- Truncate or create a new WAL file
- Continue logging operations
On recovery, load the latest snapshot and replay only the operations logged after that snapshot.
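One way to sketch the snapshot-then-truncate steps in Python (the file names, the JSON snapshot format, and the write-temp-then-rename detail are all choices, not requirements):

```python
import json
import os

def checkpoint(store, snapshot_path, wal_path):
    # Write the snapshot to a temporary file, sync it, then rename it into
    # place; os.replace is atomic, so a crash mid-checkpoint can never leave
    # a half-written snapshot as the "latest" one.
    tmp = snapshot_path + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(store, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, snapshot_path)
    # Truncate the WAL: everything logged so far is now in the snapshot.
    open(wal_path, "w").close()
```

The ordering matters: the snapshot must be durably in place before the WAL is truncated, otherwise a crash between the two steps loses the operations that were only in the log.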
Storage Layout
You now have two types of files:
- Snapshot: Full state at a point in time (from previous stage)
- WAL: Operations logged since the last snapshot
Organize these in the working directory however makes sense - separate files, subdirectories, naming conventions, etc. The test only cares that recovery works, not how you structure the files.
Testing
The test harness mounts a persistent volume at /app/data and sets the DATA_DIR environment variable to /app/data, same as the previous stage.
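Resolving file locations from the environment might look like this (the file names `wal.log` and `snapshot.json` are illustrative; only the DATA_DIR variable comes from the harness):

```python
import os

# DATA_DIR is set by the test harness; fall back to /app/data for parity.
data_dir = os.environ.get("DATA_DIR", "/app/data")
wal_path = os.path.join(data_dir, "wal.log")
snapshot_path = os.path.join(data_dir, "snapshot.json")
```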
Your server will be tested with unexpected crashes:
$ clstr test crash-recovery
Testing crash-recovery: Data Survives SIGKILL
✓ Data Survives a Hard Crash
✓ All Data Survives Repeated Hard Crashes
✓ Rapid Sequential Writes Survive a Hard Crash
✓ Rapid Concurrent Writes Survive a Hard Crash
✓ CLEAR Survives a Hard Crash
PASSED ✓
Run 'clstr next' to advance to the next stage.

The tests will:
- Store data in your server
- Kill the server process (SIGKILL) without warning
- Restart your server
- Verify all data that was acknowledged before the crash is still present
Debugging
Your server’s output (stdout/stderr) is captured during testing and viewable with clstr logs. The KILLED and STARTED markers show exactly when the crash and restart occurred:
$ clstr logs n1
================ STARTED ================
Server listening on 0.0.0.0:8080
PUT /kv/canada:capital accepted, value=Ottawa
PUT /kv/brazil:capital accepted, value=Brasilia
PUT /kv/australia:capital accepted, value=Canberra
PUT /kv/japan:capital accepted, value=Tokyo
Appended 4 entries to /app/data/wal.log
================ KILLED ================
================ STARTED ================
Server listening on 0.0.0.0:8080
Replaying 4 entries from /app/data/wal.log
GET /kv/canada:capital returning 200
GET /kv/brazil:capital returning 200
GET /kv/australia:capital returning 200
GET /kv/japan:capital returning 200