Consensus algorithms such as Raft provide fault tolerance by allowing a system to continue operating as long as a majority of its member servers are available. For example, a Raft cluster of 5 servers can make progress even if 2 servers fail. The cluster appears to clients as a single node with strong data consistency guarantees, and read requests can be initiated on any running server for aggregated read throughput.
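The availability rule above follows directly from majority quorums. A quick sketch of the arithmetic (illustrative only, not part of Dragonboat's API):

```go
package main

import "fmt"

// tolerated returns how many server failures a Raft cluster of n
// members can survive while a majority quorum can still be formed.
func tolerated(n int) int {
	quorum := n/2 + 1 // smallest strict majority of n
	return n - quorum
}

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("%d servers: quorum %d, tolerates %d failures\n",
			n, n/2+1, tolerated(n))
	}
	// → 3 servers: quorum 2, tolerates 1 failures
	//   5 servers: quorum 3, tolerates 2 failures
	//   7 servers: quorum 4, tolerates 3 failures
}
```

This is why clusters are usually sized at odd numbers: a 6-server cluster tolerates the same 2 failures as a 5-server one but requires a larger quorum.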
Dragonboat handles the technical difficulties associated with Raft so that users can focus on their application domains. It is also easy to use; the step-by-step examples can help new users master it in half an hour.
- Easy to use API for building Raft based applications in Go or C++
- Feature complete and scalable multi-group Raft implementation
- Fully pipelined, with TLS mutual authentication support, ready for high-latency open environments
- Custom Raft log storage and Raft RPC support, easy to integrate with the latest I/O technologies
- Optional Drummer server component for managing large numbers of Raft groups with high availability
- Extensively tested, including with Jepsen's Knossos linearizability checker; some results are here
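Applications built on a multi-group Raft library supply a deterministic state machine: the library replicates and commits log entries, and every replica applies them in the same order. The toy key-value state machine below illustrates that pattern; it is a local sketch with made-up names, not Dragonboat's actual `statemachine` interface:

```go
package main

import (
	"fmt"
	"strings"
)

// kvStateMachine is a toy replicated state machine: Update applies a
// committed log entry deterministically, Lookup serves reads from the
// applied state. Names are illustrative, not Dragonboat's API.
type kvStateMachine struct {
	data map[string]string
}

// Update applies a committed entry of the form "key=value".
// Because entries arrive in the same committed order on every
// replica, every replica converges to the same state.
func (s *kvStateMachine) Update(entry []byte) error {
	kv := strings.SplitN(string(entry), "=", 2)
	if len(kv) != 2 {
		return fmt.Errorf("malformed entry %q", entry)
	}
	s.data[kv[0]] = kv[1]
	return nil
}

// Lookup serves a read from the applied state.
func (s *kvStateMachine) Lookup(key string) (string, bool) {
	v, ok := s.data[key]
	return v, ok
}

func main() {
	sm := &kvStateMachine{data: map[string]string{}}
	for _, e := range []string{"region=us-east", "leader=node1"} {
		if err := sm.Update([]byte(e)); err != nil {
			panic(err)
		}
	}
	v, _ := sm.Lookup("leader")
	fmt.Println(v) // → node1
}
```

Determinism is the key property: Update must depend only on the entry and the current state, never on wall-clock time or local randomness.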
Dragonboat is the fastest open source multi-group Raft implementation on GitHub.
For a three-node system on mid-range hardware (e.g. a 22-core Intel Xeon at 2.8GHz with enterprise NVMe SSDs, details here), Dragonboat can sustain 9 million writes per second with 16-byte payloads, or 11 million mixed I/O operations per second at a 9:1 read:write ratio. High throughput is maintained in geographically distributed environments: when the RTT between nodes is 30ms, 2 million I/O operations per second can still be achieved using a much larger number of clients.
The number of concurrently active Raft groups affects overall throughput, as requests become harder to batch. On the other hand, having thousands of idle Raft groups has a much smaller impact on throughput.
The table below shows write latencies in milliseconds; Dragonboat achieves sub-5ms P99 write latency when handling 8 million 16-byte writes per second. Read latency is lower than write latency because the ReadIndex protocol used for linearizable reads does not require fsync-ed disk I/O.
|Ops|Payload Size|P99.9|P99|AVG|
|---|---|---|---|---|
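The ReadIndex protocol mentioned above avoids disk I/O by answering reads from the applied state once it has caught up to a recorded commit index. A simplified single-node model of that flow (illustrative only, not Dragonboat's internals; the quorum heartbeat step is elided):

```go
package main

import "fmt"

// leader is a simplified model of a Raft leader serving a
// linearizable read via ReadIndex. Names are illustrative.
type leader struct {
	commitIndex  uint64 // highest log index known to be committed
	appliedIndex uint64 // highest log index applied to the state machine
	data         map[string]string
}

// readIndex serves a linearizable read without an fsync:
//  1. capture the current commit index,
//  2. confirm leadership with a heartbeat quorum round (elided here),
//  3. wait until the state machine has applied up to that index,
//  4. answer from the applied state.
func (l *leader) readIndex(key string) (string, bool) {
	idx := l.commitIndex
	// step 2: a real implementation exchanges heartbeats here
	for l.appliedIndex < idx {
		l.applyNext() // let the apply loop catch up
	}
	v, ok := l.data[key]
	return v, ok
}

func (l *leader) applyNext() { l.appliedIndex++ }

func main() {
	l := &leader{commitIndex: 3, appliedIndex: 1,
		data: map[string]string{"k": "v"}}
	v, ok := l.readIndex("k")
	fmt.Println(v, ok, l.appliedIndex) // → v true 3
}
```

No log entry is appended for the read, which is why reads skip the fsync that dominates write latency.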
When tested with a single Raft group, Dragonboat can sustain 1.25 million writes per second with 16-byte payloads, with an average latency of 1.3ms and a P99 latency of 2.6ms, using an average of 3 cores (2.8GHz) on each server.
As visualized below, Stop-the-World pauses caused by Go's garbage collector are sub-millisecond on highly loaded systems. Such short Stop-the-World pause times are set to be further halved in the upcoming Go 1.12 release. Go's runtime.ReadMemStats reports that less than 1% of the available CPU time is used by GC on highly loaded systems.
- x86_64 Linux or macOS, Go 1.10 or 1.11, GCC or Clang with C++11 support
- RocksDB 5.13.4 or above
To download Dragonboat to your Go workspace:
$ go get -u -d github.com/lni/dragonboat
If RocksDB 5.13.4 or above has not been installed, use the following commands to install RocksDB 5.13.4 to /usr/local/lib and /usr/local/include.
$ cd $GOPATH/src/github.com/lni/dragonboat
$ make install-rocksdb-ull
Run built-in tests to check the installation:
$ cd $GOPATH/src/github.com/lni/dragonboat
$ make dragonboat-test
To build your application:
CGO_CFLAGS="-I/path/to/rocksdb/include" CGO_LDFLAGS="-L/path/to/rocksdb/lib -lrocksdb" go build -v pkgname
(Optional) To install the C++ binding:
$ cd $GOPATH/src/github.com/lni/dragonboat
$ make binding
$ sudo make install-binding
(Optional) Run C++ binding tests (gtest is required):
$ cd $GOPATH/src/github.com/lni/dragonboat
$ make clean
$ make test-cppwrapper
Dragonboat is production ready.
To report a bug, please open an issue. To contribute improvements or new features, please send a pull request.
Dragonboat is licensed under the Apache License Version 2.0. See LICENSE for details.
Third-party code used in Dragonboat and its licenses are summarized here.