Official Dusk Node installer, an easy-to-use installer for running a Dusk node on the Dusk mainnet, Nocturne testnet and the Lunare devnet.
For more information, check out the node operator documentation on our docs.
- Operating System: Ubuntu 24.04 LTS x64
- Dependencies: OpenSSL 3, GLibc 2.38+
- Environment: Any compatible Linux environment (VPS, local, cloud instance)
The installer officially supports Ubuntu 24.04 LTS x64. While it has also been tested successfully on Ubuntu 24.10, official support is limited to the LTS version listed above. Compatibility with other versions may vary.
The installer comes with the following packages:
- Rusk service
- Rusk wallet CLI
The configuration files, binaries, services and scripts can be found in `/opt/dusk/`.

The log files can be found in `/var/log/rusk.log` and `/var/log/rusk-recovery.log`.
To securely manage your node, it's highly recommended to use a dedicated non-root user (e.g., `duskadmin`). Before running the Node Installer, ensure you have set up a dedicated user for managing your node and configured SSH access. This user should be part of the `dusk` group to access node files and configurations.
Create a new non-root user (e.g., `duskadmin`), add them to the `dusk` group and set a password for the new user:

```sh
sudo groupadd --system dusk
sudo useradd -m -G dusk -s /bin/bash duskadmin
sudo passwd duskadmin
```
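After running the commands above, you can confirm the membership took effect. The `in_group` helper below is an illustrative sketch, not part of the installer:

```shell
#!/bin/sh
# Sketch: check whether a user belongs to a group.
# `in_group` is an illustrative helper, not part of the installer.
in_group() {
    # id -nG lists the user's groups; match the group name exactly.
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if in_group duskadmin dusk; then
    echo "duskadmin is in the dusk group"
else
    echo "duskadmin is NOT in the dusk group" >&2
fi
```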
Ensure the new user has access to your SSH keys for secure login. Add your public key directly to the new user's `authorized_keys` file:

- Edit or create the `authorized_keys` file for the new user:

```sh
sudo mkdir -p /home/duskadmin/.ssh
sudo nano /home/duskadmin/.ssh/authorized_keys
```
- Paste your public SSH key (e.g., starting with `ssh-rsa` or `ssh-ed25519`)
- Save and set proper permissions:

```sh
sudo chmod 700 /home/duskadmin/.ssh
sudo chmod 600 /home/duskadmin/.ssh/authorized_keys
sudo chown -R duskadmin:dusk /home/duskadmin/.ssh
```
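Before logging out, you can double-check that the permissions match what the commands above set. The `check_perms` helper and the hardcoded path are illustrative only:

```shell
#!/bin/sh
# Sketch: verify that an .ssh directory has mode 700 and its authorized_keys
# file mode 600, as set above. `check_perms` is an illustrative helper.
check_perms() {
    dir="$1"
    dir_mode=$(stat -c '%a' "$dir" 2>/dev/null) || return 1
    key_mode=$(stat -c '%a' "$dir/authorized_keys" 2>/dev/null) || return 1
    [ "$dir_mode" = "700" ] && [ "$key_mode" = "600" ]
}

if check_perms /home/duskadmin/.ssh; then
    echo "SSH permissions look correct"
else
    echo "SSH permissions need fixing" >&2
fi
```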
If not already done, log in as `root` or a user with sufficient privileges and add `duskadmin` to the `sudo` group:

```sh
sudo usermod -aG sudo duskadmin
```
Log out from your node for the group changes to take effect.
Test SSH access to the new user account by connecting to the node with the new account:

```sh
ssh duskadmin@<your-server-ip>
```
It's important to set up a firewall. A firewall controls incoming and outgoing traffic and ensures your system is protected.
You can use common tools like `ufw`, `iptables`, or `firewalld`. At a minimum, the following ports should be open:

- The port you use for SSH (default: `22`)
- `9000/udp` for Kadcast (used for consensus messages)
If you're running an archive node or want to expose the HTTP server, you can also open the corresponding TCP port (default: `8080`).

If you're using `ufw`, you can configure it with these commands:

```sh
# Allow SSH (default port 22)
sudo ufw limit ssh
# Allow Kadcast UDP traffic
sudo ufw allow 9000/udp
# Enable the firewall
sudo ufw enable
```
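If you also run an archive node or expose the HTTP server as described above, you can open its TCP port too. This is a sketch assuming the default port `8080`; adjust if you changed it:

```shell
# Allow the HTTP server port (only needed for archive/HTTP setups; default 8080)
sudo ufw allow 8080/tcp
```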
For non-default SSH ports or other firewall tools, adjust the commands accordingly.
ℹ️ To run the latest release of the Node Installer, execute the following command:

```sh
curl --proto '=https' --tlsv1.2 -sSfL https://github.com/dusk-network/node-installer/releases/latest/download/node-installer.sh | sudo bash
```

To run the latest development version from the `main` branch instead:

```sh
curl --proto '=https' --tlsv1.2 -sSfL https://raw.githubusercontent.com/dusk-network/node-installer/main/node-installer.sh | sudo bash
```
By default, the installer runs the node for our mainnet. If you'd like to run a node for the Nocturne testnet or Lunare devnet, you can pass `testnet` or `devnet` as an option during installation:

```sh
curl --proto '=https' --tlsv1.2 -sSfL https://github.com/dusk-network/node-installer/releases/latest/download/node-installer.sh | sudo bash -s testnet
```
It is possible to run an archive node through the installer. By default, the installer will download a Provisioner node with proving capabilities. By setting the `FEATURE` variable to `archive`, it's possible to download an archive node binary:

```sh
curl --proto '=https' --tlsv1.2 -sSfL https://github.com/dusk-network/node-installer/releases/latest/download/node-installer.sh | FEATURE="archive" sudo bash
```
The installer comes with sane defaults, only requiring minimal configuration. Before the Rusk service can be started, the `CONSENSUS_KEYS` and `DUSK_CONSENSUS_KEYS_PASS` need to be provided.

The `CONSENSUS_KEYS` can be either moved to `/opt/dusk/conf/` from another system or generated on the node itself and moved there.
To generate the consensus keys locally, run `rusk-wallet` and either create a new wallet or use a recovery phrase with `rusk-wallet restore`.

To generate and export the consensus key-pair and put the `.keys` file in the right directory with the right name, copy the following command:

```sh
rusk-wallet export -d /opt/dusk/conf -n consensus.keys
```
Run the following command and it will prompt you to enter the password for the consensus keys file:

```sh
sh /opt/dusk/bin/setup_consensus_pwd.sh
```
To remove old Rusk state and the old wallet cache, simply run:

```sh
ruskreset
```
Everything should be configured now and the node is ready to run. Start the Rusk service with the following command:

```sh
service rusk start
```
Check the status of the Rusk service by running:

```sh
service rusk status
```
To check your installer version, run:

```sh
ruskquery version
```
If you're running an outdated version of the installer, it will warn you and ask you to upgrade.
To significantly reduce the time required to sync your node to the latest published state, you can use the `download_state` command. This command stops your node and replaces its current state with the latest published state from one of Dusk's archival nodes.

To see the available published states, run:

```sh
download_state --list
```
1. Stop your node (if it's running):

   ```sh
   service rusk stop
   ```

2. Execute the fast sync command:

   ```sh
   download_state
   ```

   If you want to sync up with a specific state instead of the default one, pass the block height of the state you want to sync up with:

   ```sh
   download_state 369876
   ```

   Follow the prompts to confirm the operation.

3. Restart your node:

   ```sh
   service rusk restart
   ```
This process will ensure your node is up-to-date with the latest blockchain state, allowing you to sync faster and get back to participating in the network in less time.
ℹ️ Note: If you are experiencing errors in downloading the state, it might be due to remnants of a previous state sync. Try cleaning up with `sudo rm /tmp/state.tar.gz`.
Check if your node is syncing, processing and accepting new blocks:

```sh
tail -F /var/log/rusk.log | grep "block accepted"
```
To check the latest block height:

```sh
ruskquery block-height
```
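To get a rough feel for sync speed, you can sample the height twice and compute a rate. This is a sketch; only `ruskquery block-height` comes from the installer, while the `blocks_per_min` helper and the 60-second interval are illustrative:

```shell
#!/bin/sh
# Sketch: estimate how many blocks per minute the node is accepting by
# sampling `ruskquery block-height` twice. `blocks_per_min` is illustrative.
blocks_per_min() {
    # $1: first height, $2: second height, $3: seconds between samples
    echo $(( ($2 - $1) * 60 / $3 ))
}

if command -v ruskquery >/dev/null 2>&1; then
    h1=$(ruskquery block-height)
    sleep 60
    h2=$(ruskquery block-height)
    echo "accepting ~$(blocks_per_min "$h1" "$h2" 60) blocks/min"
else
    echo "ruskquery not found; is the node installed?" >&2
fi
```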