- Python3
You don't need to worry about installing these tools: everything will be installed automatically into the ~/.tank/bin directory when the first Run object is created.
Optionally, create and activate a virtualenv (assuming venv is the directory of a newly created virtualenv):
Linux:
sudo apt-get install -y python3-virtualenv
python3 -m virtualenv -p python3 venv
MacOS:
pip3 install virtualenv
python3 -m virtualenv -p python3 venv
After creating the virtualenv, activate it in each newly opened terminal to be able to work with Tank:
. venv/bin/activate
Alternatively, call the Tank executable directly each time: venv/bin/tank.
pip3 install mixbytes-tank
The user config is stored in ~/.tank.yml. It keeps settings that are the same for the current user regardless of the blockchain or the testcase used at the moment (e.g., it tells which cloud provider to use no matter which blockchain you are testing). An example can be found at docs/config.yml.example.
The user config contains cloud provider configurations, a pointer to the current cloud provider, and some auxiliary values.
Please configure at least one cloud provider. The essential steps are:
- providing (and possibly creating) a key pair
- registering a public key with your cloud provider (if needed)
- specifying a cloud provider access token or credentials
We recommend creating a distinct key pair for benchmarking purposes. The key must not be protected with a passphrase. Make sure that the permissions of the private key are 0600 or 0400 (i.e. the private key is not accessible by anyone except the owner). The simplest way is:
ssh-keygen -t rsa -b 2048 -f bench_key
The command will create a private key file (bench_key) and a public key file (bench_key.pub).
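If the key was created by other means and its permissions are too open, they can be tightened manually (assuming the key file is named bench_key as above):
chmod 600 bench_key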
The private key will be used to gain access to the cloud instances created during a run. It must be provided to each cloud provider via the pvt_key option.
The public key goes into the cloud provider settings in accordance with the provider's requirements (e.g., GCE takes a file, while DO takes only a fingerprint).
A cloud provider is configured as a designated section in the user config.
The Digital Ocean section is called digitalocean, the Google Compute Engine section - gce.
The purpose of having multiple cloud provider sections at the same time is to be able to quickly switch cloud providers using the provider pointer in the tank section.
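For illustration, here is a minimal sketch of how these pieces might fit together in ~/.tank.yml (only the tank, provider, digitalocean, gce and pvt_key names come from this document; everything else, including the key path, is a placeholder - see docs/config.yml.example for the real schema):
tank:
  provider: digitalocean          # pointer to the cloud provider section used for runs

digitalocean:
  pvt_key: /home/user/bench_key   # placeholder path to the private key created above
  # provider credentials (e.g. an access token) and the public key reference go here,
  # with key names taken from docs/config.yml.example

gce:
  pvt_key: /home/user/bench_key   # placeholder path
  # GCE credentials and the public key file go here, per the provider requirements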
There is a way to globally specify some Ansible variables for a particular cloud provider. This can be done in the ansible section of the cloud provider configuration.
The specified values are only useful if some blockchain binding consumes them (see below). Since the same variables are passed to any blockchain binding, this feature is low-level and rarely used.
Each variable will be prefixed with bc_ before being passed to Ansible.
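As a hedged illustration (the variable name below is purely hypothetical), such a section might look like this; the variable would reach Ansible as bc_custom_opt:
digitalocean:
  ansible:
    custom_opt: some_value   # hypothetical variable; passed to the blockchain binding as bc_custom_opt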
Note: the following options affect only Tank logging. Terraform and Ansible won't be affected.
- log.logging.level: sets the log level. Acceptable values are ERROR, WARNING (the default), INFO, DEBUG.
- log.logging.file: sets the log file name (logging goes to the console by default).
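A sketch of how this could look in ~/.tank.yml, assuming the nesting follows the log.logging path above (the exact structure is an assumption - check docs/config.yml.example):
log:
  logging:
    level: DEBUG      # ERROR, WARNING (default), INFO or DEBUG
    file: tank.log    # hypothetical file name; omit to keep logging on the console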
A Tank testcase describes a benchmark scenario.
A simple example can be found at docs/testcase_example.yml.
The principal testcase contents are the blockchain binding name and the configuration of instances.
Tank supports many blockchains via the concept of a binding. A binding provides an Ansible role to deploy the blockchain (some examples here) and JavaScript code to create load in the cluster (examples here). Similarly, databases use bindings to provide APIs to programming languages.
A binding is specified by its name, e.g.:
binding: polkadot
You shouldn't worry about writing or understanding a binding unless you want to add support for a new blockchain to Tank.
A blockchain cluster consists of a number of different instance roles, e.g. full nodes and miners/validators. Available roles depend on the binding used.
A blockchain instances configuration is a set of role configurations. E.g., in the simplest case:
instances:
  boot: 1
  producer: 3
In the simplest case, a role configuration is just a number specifying how many servers should be set up with this role installed.
instances:
  producer: 3
Alternatively, a role configuration can be written as an object with various options - both generally applicable and role-specific.
instances:
  boot:
    count: 1
- The count option specifies how many servers to set up with this role installed.
- The regions option sets a region configuration for the role configuration.
A region configuration maps region names to region options. In the simplest case, a region configuration simply says how many role instances should be set up per region:
instances:
  producer:
    regions:
      Europe: 4
      Asia: 3
      NorthAmerica: 3
A region name is one of the following: Europe, Asia, NorthAmerica, random, default.
The Europe, Asia, and NorthAmerica region names are self-explanatory.
The random region indicates that instances must be distributed evenly across the available regions.
Region names are cloud provider-agnostic and can be configured in ~/.tank/regions.yml (by default, the predefined region config is copied and used at the moment of the first run creation).
In general, region options can be written as a set of various options - both generally applicable and region-specific.
The count region option specifies how many servers should be set up in the region.
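For example, a sketch of a region configuration written with an explicit option object instead of a bare number (assumed to be equivalent to Europe: 4):
instances:
  producer:
    regions:
      Europe:
        count: 4   # same meaning as the shorthand Europe: 4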
Generally applicable options can be specified in a number of contexts: instances, role configuration, region configuration.
More local contexts take precedence over wrapping contexts, e.g. an option specified in a role configuration takes precedence over the same option specified at the instances level:
instances:
  type: standard
  boot:
    count: 1
    type: large
  producer:
    regions:
      random: 10
The options are:
- type - an instance type, which is a cloud-agnostic machine size. Available types: micro (~1 GB mem), small (~2 GB), standard (4 GB), large (8 GB), xlarge (16 GB), xxlarge (32 GB), huge (64 GB).
- packetloss - simulates bad network operation by setting the percentage of lost packets. Note: TCP ports 1..1024 are not packetloss-ed.
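For instance, a sketch of a role configuration combining these options (the values are illustrative):
instances:
  producer:
    count: 5
    type: large      # ~8 GB of memory
    packetloss: 10   # drop roughly 10% of packets; TCP ports 1..1024 are unaffected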
A simple geographically distributed testcase can be found at docs/testcase_geo_example.yml.
An example utilizing generally applicable options and a region configuration can be found at docs/testcase_geo_advanced_example.yml.
There is a way to pass some Ansible variables from a testcase to a cluster.
This low-level feature can be used to tailor the blockchain for a particular test case.
Variables can be specified in the ansible section of a testcase.
Each variable will be prefixed with bc_ before being passed to Ansible.
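A hedged sketch of such a testcase fragment (the variable name is purely illustrative):
binding: polkadot
instances:
  producer: 3
ansible:
  some_tuning_option: some_value   # hypothetical variable; delivered to Ansible as bc_some_tuning_option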
Deploy a new cluster via
tank cluster deploy <testcase file>
or
tank run <testcase file>
This command will create a cluster dedicated to the specified test case.
Such clusters are named runs in Tank terminology.
There can be multiple coexisting runs on a developer's machine.
Any changes to the testcase made after the deploy command won't affect the run.
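For example, using the example testcase shipped with the docs:
tank cluster deploy docs/testcase_example.yml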
After the command is finished, you will see a listing of cluster machines and a run id, e.g.:
IP HOSTNAME
------------- -------------------------------------
167.71.36.223 tank-polkadot-db2d81e031a1-boot-0
167.71.36.231 tank-polkadot-db2d81e031a1-monitoring
167.71.36.222 tank-polkadot-db2d81e031a1-producer-0
165.22.74.160 tank-polkadot-db2d81e031a1-producer-1
Monitoring: http://167.71.36.231/
Tank run id: festive_lalande
You can also see the monitoring link - that's where all the metrics are collected (see below).
The cluster is up and running at this moment.
You can see its state on the dashboards or query cluster information via the info and inspect commands (see below).
Tank uses Grafana to visualize benchmark metrics. To access your Grafana dashboard, open the monitoring link in your browser.
Access to the dashboard requires a Grafana username and password.
You can modify the Grafana username and password in the ~/.tank.yml configuration file (go to monitoring in the tank section).
If you have not defined these variables in your configuration file, type 'tank' in both the username and password fields.
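A hedged sketch of that section (the admin_user and admin_password key names are assumptions - check docs/config.yml.example for the exact keys):
tank:
  monitoring:
    admin_user: tank       # assumed key name; 'tank' is the default login
    admin_password: tank   # assumed key name; 'tank' is the default password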
You will see cluster metrics in the predefined dashboards.
You can query the metrics at http://{the monitoring ip}/explore.
There can be multiple tank runs at the same time. The runs list and brief information about each run can be seen via:
tank cluster list
To list the hosts of a cluster, call
tank cluster info hosts {run id here}
To get detailed cluster info, call
tank cluster inspect {run id here}
Tank can run a javascript load profile on the cluster.
tank cluster bench <run id> <load profile js> [--tps N] [--total-tx N]
- <run id> - run ID
- <load profile js> - a js file with a load profile: custom logic which creates transactions to be sent to the cluster
- --tps - total number of generated transactions per second
- --total-tx - total number of transactions to be sent
In the simplest case, a developer writes logic to create and send transactions, and Tank takes care of distributing and running the code, providing the requested tps.
You can bench the same cluster with different load profiles by providing different arguments to the bench subcommand. The documentation on profile development can be found at https://github.com/mixbytes/tank.bench-common.
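For example, a hypothetical invocation against the run created above, with a load profile saved locally as profile.js:
tank cluster bench festive_lalande profile.js --tps 100 --total-tx 10000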
Binding parts responsible for benching can be found here.
Examples of load profiles can be found in profileExamples subfolders, e.g. https://github.com/mixbytes/tank.bench-polkadot/tree/master/profileExamples.
The entire Tank data of a particular run (both in the cloud and on the developer's machine) will be irreversibly deleted by:
tank cluster destroy <run id>
The cluster deploy command actually performs the following steps:
- init
- create
- dependency
- provision
These steps can be executed one by one or repeated. This is low-level Tank usage. Tank does not check the correct order or applicability of these operations if you run them manually.
For more information, call tank cluster -h.
- init: creates a run and prepares Terraform execution.
- plan: a read-only command that generates and shows an execution plan by Terraform; the plan shows the cloud resources that will be created during create.
- create: creates a cluster in a cloud by calling Terraform for the run.
- dependency: installs the necessary Ansible dependencies (roles) for the run.
- provision: sets up all the necessary software in the cluster by calling Ansible for the run.