Distributed Key-Value Store

Welcome to the distributed key-value store challenge!

You’ll build a distributed key-value store from scratch using Raft consensus, the same algorithm that powers etcd and Consul. Start with a single-node system that handles persistence and crash recovery, then implement leader election, log replication, and dynamic cluster membership.

This is the first project in clstr.io’s distributed systems series. It teaches consensus-based replication; later projects will teach different patterns like leaderless replication, CRDTs, and Byzantine fault tolerance.

Stage 1: Build a basic in-memory key-value store with GET/PUT/DELETE operations over HTTP.
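
As a rough sketch of what this first stage asks for, here is a minimal in-memory store exposed over HTTP with Python's standard library. The URL scheme (one key per path) and port are illustrative choices, not part of the challenge spec:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

store = {}               # key -> value
lock = threading.Lock()  # guards the dict across request threads

class KVHandler(BaseHTTPRequestHandler):
    def _key(self):
        # Illustrative scheme: the key is the request path, e.g. PUT /mykey
        return self.path.lstrip("/")

    def do_GET(self):
        with lock:
            value = store.get(self._key())
        if value is None:
            self.send_response(404)
            self.end_headers()
            return
        body = value.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        value = self.rfile.read(length).decode()
        with lock:
            store[self._key()] = value
        self.send_response(204)
        self.end_headers()

    def do_DELETE(self):
        with lock:
            existed = store.pop(self._key(), None) is not None
        self.send_response(204 if existed else 404)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

# To serve: ThreadingHTTPServer(("127.0.0.1", 8080), KVHandler).serve_forever()
```

Any HTTP framework works here; the only requirement is the three operations.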

Stage 2: Add persistence to your store. Data should survive clean shutdowns (SIGTERM).
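
One plausible approach (file name and JSON format are illustrative): serialize the store from a SIGTERM handler and reload it at startup. Writing to a temp file and renaming keeps the saved file whole even if the save itself is interrupted:

```python
import json
import os
import signal
import sys

DATA_PATH = "store.json"  # illustrative location

def save(store, path=DATA_PATH):
    # Write to a temp file, then rename: os.replace is atomic, so the
    # on-disk file is always either the old snapshot or the new one.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(store, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)

def load(path=DATA_PATH):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first boot: nothing saved yet

store = load()

def on_sigterm(signum, frame):
    save(store)  # flush state before exiting cleanly
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)
```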

Stage 3: Ensure data consistency after crashes. Data should survive unclean shutdowns (SIGKILL).
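
A common way to meet the SIGKILL requirement is a write-ahead log: append each operation and fsync before acknowledging, then rebuild the map on startup by replaying the log, discarding a torn final record. A sketch, with an assumed one-JSON-record-per-line format:

```python
import json
import os

class WalStore:
    """KV store backed by an append-only write-ahead log (sketch)."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        self._replay()
        self.log = open(path, "a")

    def _replay(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                try:
                    op = json.loads(line)
                except json.JSONDecodeError:
                    break  # torn final write from a crash; discard it
                if op["op"] == "put":
                    self.data[op["key"]] = op["value"]
                elif op["op"] == "delete":
                    self.data.pop(op["key"], None)

    def _append(self, record):
        self.log.write(json.dumps(record) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())  # durable before acknowledging

    def put(self, key, value):
        self._append({"op": "put", "key": key, "value": value})
        self.data[key] = value

    def delete(self, key):
        self._append({"op": "delete", "key": key})
        self.data.pop(key, None)

    def get(self, key):
        return self.data.get(key)
```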

Stage 4: Form a cluster and elect a leader using the Raft consensus algorithm.
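
Two pieces of leader election, sketched below: randomized election timeouts (the 150–300 ms range comes from the Raft paper) and the term/vote bookkeeping a node does when it receives a RequestVote RPC. The log up-to-date check and the RPC transport are omitted for brevity:

```python
import random

ELECTION_TIMEOUT_MS = (150, 300)  # randomized range from the Raft paper

def election_timeout():
    # Randomized timeouts make repeated split votes unlikely.
    return random.uniform(*ELECTION_TIMEOUT_MS) / 1000.0

class NodeState:
    def __init__(self):
        self.current_term = 0
        self.voted_for = None  # candidate id voted for in current_term

def handle_request_vote(state, term, candidate_id):
    """Decide a RequestVote RPC; returns (current_term, vote_granted)."""
    if term < state.current_term:
        return state.current_term, False  # stale candidate
    if term > state.current_term:
        state.current_term = term  # newer term: our old vote is void
        state.voted_for = None
    if state.voted_for in (None, candidate_id):
        state.voted_for = candidate_id  # at most one vote per term
        return state.current_term, True
    return state.current_term, False
```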

Stage 5: Replicate operations from the leader to followers with strong consistency guarantees.
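
The heart of replication is the follower-side consistency check from the Raft paper: an AppendEntries call succeeds only if the follower's log matches the leader's at `prev_log_index`, and conflicting entries are overwritten. A sketch over an in-memory log:

```python
def append_entries(log, prev_log_index, prev_log_term, entries):
    """Follower-side AppendEntries sketch. `log` is a list of
    (term, command) tuples, 1-indexed as in the Raft paper."""
    # Consistency check: our log must contain an entry at prev_log_index
    # with a matching term, or we reject and the leader backs up.
    if prev_log_index > 0:
        if len(log) < prev_log_index or log[prev_log_index - 1][0] != prev_log_term:
            return False
    index = prev_log_index
    for term, command in entries:
        index += 1
        if len(log) >= index and log[index - 1][0] != term:
            del log[index - 1:]  # conflict: drop this entry and everything after
        if len(log) < index:
            log.append((term, command))
    return True
```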

Stage 6: Prevent unbounded log growth through snapshots and log truncation.
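
The idea in miniature: once entries have been applied to the state machine, capture the state plus the index and term of the last applied entry, then discard that log prefix. This is a purely in-memory sketch; a real implementation persists the snapshot and keeps `last_included_index` around to translate log indices afterwards:

```python
def take_snapshot(state_machine, log, last_applied):
    """Capture applied state and drop the log prefix it covers.
    `log` is a 1-indexed list of (term, command) tuples."""
    snapshot = {
        "state": dict(state_machine),
        "last_included_index": last_applied,
        "last_included_term": log[last_applied - 1][0],
    }
    compacted = log[last_applied:]  # keep only entries not yet applied
    return snapshot, compacted
```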

Stage 7: Add and remove nodes from the cluster one at a time without downtime.

Stage 8: Add and remove multiple nodes at once using joint consensus.
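
The key rule of joint consensus is the quorum check: while both configurations are live, a decision (a vote or a commit) needs a majority of the old configuration *and* a majority of the new one, so neither set can decide alone during the transition. A sketch of that check:

```python
def has_quorum(acks, old_nodes, new_nodes=None):
    """Return True if `acks` (a set of node ids) forms a quorum.
    During joint consensus, a majority of BOTH configurations is
    required; with new_nodes=None this is the normal single-config case."""
    def majority(nodes):
        return len(acks & nodes) > len(nodes) // 2
    if new_nodes is None:
        return majority(old_nodes)
    return majority(old_nodes) and majority(new_nodes)
```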

If you haven’t already, read this overview on how it works and then start with stage 1 (HTTP API).