Ravel is a sharded, fault-tolerant key-value store built on BadgerDB and hashicorp/raft. Data is sharded across multiple clusters, each with multiple replicas, and persisted on disk using BadgerDB for high read and write throughput. Replication and fault tolerance are handled by Raft.

Ravel exposes a simple HTTP API for reading and writing data; sharding and replication across clusters are handled internally.
- Installation
- Usage
- Setup a Cluster
- Reading and Writing Data
- Killing A Ravel Instance
- Uninstalling Ravel
- Documentation and Further Reading
- Contributing
- Contact
- License
## Installation

Ravel has two functional components: a cluster admin server and a replica node, each with its own binary. To set up Ravel correctly, you'll need to start one cluster admin server and as many replica nodes as required.
This will download the `ravel_node` and `ravel_cluster_admin` binary files and move them to `/usr/local/bin`; make sure that directory is in your `$PATH`.

```shell
curl https://raw.githubusercontent.com/adityameharia/ravel/main/install.sh | bash
```
The `cmd/ravel_node` directory has the implementation of `ravel_node`, the replica node. The `cmd/ravel_cluster_admin` directory has the implementation of `ravel_cluster_admin`, the cluster admin server.
- Clone this repository

  ```shell
  git clone https://github.com/adityameharia/ravel
  cd ravel
  git checkout master
  ```
- Build `ravel_node` and `ravel_cluster_admin`

  ```shell
  cd cmd/ravel_node
  go build
  sudo mv ./ravel_node /usr/local/bin
  cd ../ravel_cluster_admin
  go build
  sudo mv ./ravel_cluster_admin /usr/local/bin
  ```
This will build the `ravel_node` and `ravel_cluster_admin` binaries in `cmd/ravel_node` and `cmd/ravel_cluster_admin` respectively and move them to `/usr/local/bin`.
## Usage

Usage info for `ravel_cluster_admin`:

```
$ ravel_cluster_admin --help
NAME:
   Ravel Cluster Admin - Start a Ravel Cluster Admin server

USAGE:
   ravel_cluster_admin [global options] command [command options] [arguments...]

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --http value        Address (with port) on which the HTTP server should listen
   --grpc value        Address (with port) on which the gRPC server should listen
   --backupPath value  Path where the Cluster Admin should persist its state on disk
   --help, -h          show help
```
Usage info for `ravel_node`:

```
$ ravel_node --help
NAME:
   Ravel Replica - Manage a Ravel replica server

USAGE:
   ravel_node [global options] command [command options] [arguments...]

COMMANDS:
   start    Starts a replica server
   kill     Removes and deletes all the data in the cluster
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --help, -h  show help (default: false)
```
Usage info for the `start` command in `ravel_node`. Use this command to start a replica server.

```
$ ravel_node start --help
NAME:
   ravel_node start - Starts a replica server

USAGE:
   ravel_node start [command options] [arguments...]

OPTIONS:
   --storagedir value, -s value    Storage Dir (default: "~/ravel_replica")
   --grpcaddr value, -g value      GRPC Addr of this replica (default: "localhost:50000")
   --raftaddr value, -r value      Raft Internal address for this replica (default: "localhost:60000")
   --adminrpcaddr value, -a value  GRPC address of the cluster admin (default: "localhost:42000")
   --yaml value, -y value          yaml file containing the config
   --leader, -l                    Register this node as a new leader or not (default: false)
   --help, -h                      show help (default: false)
```
## Setup a Cluster

Executing the following instructions will set up a sample Ravel instance. The simplest configuration of a Ravel instance consists of 2 clusters with 3 replicas each.

The key-value pairs are sharded across the two clusters and replicated three times within each cluster. The admin automatically decides which replica goes to which cluster. When clusters are added or removed, the affected keys are automatically relocated to other clusters. Deleting the last remaining cluster deletes all the keys in the instance.
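The exact placement logic lives inside the cluster admin, but the general idea of sharding keys across a fixed number of clusters can be sketched with a simple hash-mod scheme. This is an illustration only, not Ravel's actual routing code:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// clusterFor maps a key to one of n clusters by hashing it.
// NOTE: this hash-mod scheme is only a generic illustration of
// sharding; Ravel's admin uses its own internal placement logic.
func clusterFor(key string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(key)) // fnv's Write never returns an error
	return int(h.Sum32() % uint32(n))
}

func main() {
	// With 2 clusters (as in the sample setup below), every key
	// lands deterministically on one of them.
	for _, k := range []string{"the_answer", "dogegod", "hello_friend"} {
		fmt.Printf("%-12s -> cluster %d\n", k, clusterFor(k, 2))
	}
}
```

Because the mapping is deterministic, reads for a key are always routed to the same cluster that stored it.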
- Set up the cluster admin server

  ```shell
  sudo ravel_cluster_admin --http="localhost:5000" --grpc="localhost:42000" --backupPath="~/ravel_admin"
  ```
- Set up the cluster leaders

  ```shell
  sudo ravel_node start -s="/tmp/ravel_leader1" -l=true -r="localhost:60000" -g="localhost:50000" -a="localhost:42000"
  sudo ravel_node start -s="/tmp/ravel_leader2" -l=true -r="localhost:60001" -g="localhost:50001" -a="localhost:42000"
  ```
- Set up the replicas

  ```shell
  sudo ravel_node start -s="/tmp/ravel_replica1" -r="localhost:60002" -g="localhost:50002" -a="localhost:42000"
  sudo ravel_node start -s="/tmp/ravel_replica2" -r="localhost:60003" -g="localhost:50003" -a="localhost:42000"
  sudo ravel_node start -s="/tmp/ravel_replica3" -r="localhost:60004" -g="localhost:50004" -a="localhost:42000"
  sudo ravel_node start -s="/tmp/ravel_replica4" -r="localhost:60005" -g="localhost:50005" -a="localhost:42000"
  ```
NOTE

- `-l=true` sets up a new cluster; it defaults to false.
- Don't forget the storage directory, as you will need it to delete the replica.
- All the commands and flags can be viewed using the `-h` or `--help` flag.
## Reading and Writing Data

Once the replicas and admin are set up, we can start sending HTTP requests to our cluster admin server to read, write and delete key-value pairs.

The cluster admin server exposes 3 HTTP routes:
- URL: `/put`
  - Method: `POST`
  - Description: Store a key-value pair in the system
  - Request Body: `{"key": "<your_key_here>", "val": <your_value_here>}`
    - `key` = [string]
    - `val` = [string | float | JSON Object]
  - Success Response: `200` with body `{"msg": "ok"}`
- URL: `/get`
  - Method: `POST`
  - Description: Get a key-value pair from the system
  - Request Body: `{"key": "<your_key_here>"}`
    - `key` = [string]
  - Success Response: `200` with body `{"key": <key>, "val": <value>}`
- URL: `/delete`
  - Method: `POST`
  - Description: Delete a key-value pair from the system
  - Request Body: `{"key": "<your_key_here>"}`
    - `key` = [string]
  - Success Response: `200` with body `{"msg": "ok"}`
- Sample `/put` requests

  ```json
  {
    "key": "the_answer",
    "value": 42
  }
  ```

  ```json
  {
    "key": "dogegod",
    "value": "Elon Musk"
  }
  ```

  ```json
  {
    "key": "hello_friend",
    "value": {
      "elliot": "Rami Malek",
      "darlene": "Carly Chaikin"
    }
  }
  ```
- Sample `/get` request

  ```json
  {
    "key": "dogegod"
  }
  ```
- Sample `/delete` request

  ```json
  {
    "key": "dogegod"
  }
  ```
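Putting the routes together, a minimal Go client for the admin's HTTP API might look like the following sketch. It assumes the `localhost:5000` HTTP address from the setup example above and uses the `val` field named in the route specs; error handling is kept deliberately simple.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// putKey stores a key-value pair via the admin's /put route.
// adminAddr is the admin's HTTP address, e.g. "http://localhost:5000".
func putKey(adminAddr, key string, val interface{}) error {
	body, err := json.Marshal(map[string]interface{}{"key": key, "val": val})
	if err != nil {
		return err
	}
	resp, err := http.Post(adminAddr+"/put", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		msg, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("put failed: %s", msg)
	}
	return nil
}

// getKey fetches the value stored under key via the /get route.
func getKey(adminAddr, key string) (interface{}, error) {
	body, err := json.Marshal(map[string]string{"key": key})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(adminAddr+"/get", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// Per the route spec, a success response looks like
	// {"key": <key>, "val": <value>}.
	var out struct {
		Key string      `json:"key"`
		Val interface{} `json:"val"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Val, nil
}

func main() {
	const admin = "http://localhost:5000" // HTTP address from the setup example
	if err := putKey(admin, "the_answer", 42); err != nil {
		fmt.Println("put:", err)
		return
	}
	v, err := getKey(admin, "the_answer")
	if err != nil {
		fmt.Println("get:", err)
		return
	}
	fmt.Println("the_answer =", v)
}
```

A `/delete` wrapper would look exactly like `getKey`, posting `{"key": ...}` to `/delete` instead.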
## Killing A Ravel Instance

Stopping a Ravel instance neither deletes its data/configuration nor removes it from the system; it simulates a crash, in the expectation that the node will come back up. Once the node is back up, it syncs all of its data from the leader node.

To delete all the data and configuration and remove the instance from the system, you need to kill it.

```shell
ravel_node kill -s="the storage directory you specified while starting the node"
```
Stopping the `ravel_cluster_admin` breaks the entire system and renders it unusable. It is recommended not to stop or kill the admin unless all the replicas have been properly killed.

The cluster admin server persists its state on disk for recovery. To truly reset it, you have to delete its storage directory.
## Uninstalling Ravel

Ravel can be uninstalled by deleting the binaries from `/usr/local/bin`:

```shell
sudo rm /usr/local/bin/ravel_node
sudo rm /usr/local/bin/ravel_cluster_admin
```
## Documentation and Further Reading

- API Reference: https://pkg.go.dev/github.com/adityameharia/ravel
- To read about the data flow of the system, refer to data flow in admin and data flow in replica
- Each package also has its own README explaining what it does and how it does it.
- Other blogs and resources
## Contributing

If you're interested in contributing to Ravel, check out CONTRIBUTING.md
## Contact

Reach out to the authors with questions, concerns or ideas about improvement.
## License

Copyright (c) Aditya Meharia and Junaid Rahim. All rights reserved. Released under the MIT License.