Mainnet Setup and Tooling
It's 🚀 time!
All of the hard work of the community has paid off, and now it's time to take the network live. Preparing your validator for mainnet involves a few extra considerations. They are detailed below, but a sensible checklist is:
How will you handle chain upgrades?
consider: Cosmovisor
How will you know your node is up?
consider: Monitoring and alerts
How will you mitigate DDOS attacks?
consider: Sentry Nodes
How much storage will you need?
Answering these questions can be daunting, so there is some advice below.
In order to streamline chain upgrades and minimise downtime, you may want to set up Cosmovisor to manage your node. A guide to setting up Cosmovisor is available in the Juno docs.
Backups of chain state are possible using the backup commands described in the docs. If you are using a recent version of Cosmovisor, a state backup will be created before upgrades are applied by default.
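As a reference point, a typical Cosmovisor environment for junod looks something like the sketch below; the home directory and flag values are assumptions to adapt to your own setup.

```bash
# Cosmovisor environment for junod, a sketch; adjust paths to your setup
export DAEMON_NAME=junod
export DAEMON_HOME=$HOME/.juno
export DAEMON_RESTART_AFTER_UPGRADE=true
# Recent Cosmovisor versions back up the data directory before applying an
# upgrade unless you explicitly opt out by setting this to true
export UNSAFE_SKIP_BACKUP=false
```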
Alerting and monitoring are desirable as well - you are encouraged to explore solutions and find one that works for your setup. Prometheus is available out of the box, and there are a variety of open-source tools. Recommended reading:
Alerting: Tenderduty, PANIC
Monitoring: Prometheus and Grafana (a walkthrough using Grafana Cloud follows below)
Using only the raw metrics endpoint provided by junod, you can get a working dashboard and alerting setup using Grafana Cloud. This means you don't have to run Grafana on the instance itself.
1. First, enable Prometheus in config.toml. The default metrics port will be 26660.
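The relevant block is the [instrumentation] section of config.toml; a minimal example (the listen address shown is the default):

```toml
[instrumentation]
# Expose junod metrics for Prometheus to scrape
prometheus = true
# Address the metrics endpoint listens on, port 26660 by default
prometheus_listen_addr = ":26660"
```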
2. Download Prometheus - this is needed to ship metrics to Grafana Cloud. Then create a prometheus.yml file in the Prometheus folder with the details you can get via the Grafana UI: click 'details' on the Prometheus card.
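A minimal prometheus.yml sketch that scrapes junod locally and ships to Grafana Cloud; the remote_write URL, username and API key are placeholders for the values shown on the Grafana 'details' page:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: junod
    static_configs:
      # junod's Prometheus endpoint, as enabled in config.toml above
      - targets: ['localhost:26660']

remote_write:
  # Fill these in with the values from your Grafana Cloud Prometheus card
  - url: https://<your-grafana-cloud-prometheus-endpoint>/api/prom/push
    basic_auth:
      username: <your-grafana-cloud-username>
      password: <your-grafana-cloud-api-key>
```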
3. Set up a service file with sudo nano /etc/systemd/system/prometheus.service, replacing <your-user> with your user and <prometheus-folder> with the location of Prometheus. This sets the Prometheus port to 6666.
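A sketch of what that service file might look like, assuming Prometheus was unpacked into <prometheus-folder> under your home directory; the --web.listen-address flag is what moves Prometheus from its default port 9090 to 6666:

```ini
[Unit]
Description=Prometheus
After=network-online.target

[Service]
User=<your-user>
ExecStart=/home/<your-user>/<prometheus-folder>/prometheus \
  --config.file=/home/<your-user>/<prometheus-folder>/prometheus.yml \
  --web.listen-address=:6666
Restart=on-failure

[Install]
WantedBy=multi-user.target
```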
4. Enable and start the service.
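With systemd, that looks like:

```bash
# Pick up the new unit, then enable it at boot and start it now
sudo systemctl daemon-reload
sudo systemctl enable prometheus
sudo systemctl start prometheus

# Confirm it is running and shipping metrics
sudo systemctl status prometheus
```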
5. Import a dashboard to your Grafana. Search for 'Cosmos Validator' to find several options. You should see metrics arriving in the dashboard after a couple of minutes.
For more info:
Disk space is likely to fill up, so having a plan for managing storage is key.
If you are running sentry nodes:
1TB storage for the full node will give you a lot of runway
200GB each for the sentries with pruning should be sufficient
Managing backups is outside the scope of this documentation, but several validators keep public snapshots and backups.
It is anticipated that state-sync will soon work for wasm chains, although it does not currently.
To give you an idea of cost, on AWS EBS (other cloud providers are available, or you can run your own hardware), with two backups a day, this runs to roughly:
$150 for the 1TB full node volume
$35 for each 200GB sentry volume
Total cost: roughly $220 for a full node plus two sentries
What approach you take for this will depend on whether you are running on physical hardware co-located with you, running in a data centre, or running on virtualised hardware.
If you are comfortable with server ops, you might want to build out a sentry node architecture around your validator to protect against DDOS attacks.
The current best practice for running mainnet nodes is a Sentry Node Architecture. There are various approaches: some validators advocate co-locating all three nodes in virtual partitions on a single box, using Docker or other virtualisation tools. However, if in doubt, just run each node on a different server.
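As a rough sketch, the relevant config.toml settings for a simple sentry setup look like this; the node IDs and addresses are placeholders, and 26656 is the default P2P port:

```toml
# config.toml on the validator: peer only with your own sentries
[p2p]
pex = false
persistent_peers = "<sentry-node-id>@<sentry-ip>:26656"
addr_book_strict = false
```

```toml
# config.toml on each sentry: gossip with the network as normal,
# but never advertise the validator to other peers
[p2p]
pex = true
persistent_peers = "<validator-node-id>@<validator-private-ip>:26656"
private_peer_ids = "<validator-node-id>"
```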
Bear in mind that sentries can have pruning turned on. It is desirable, but not essential, to have pruning disabled on the validator node itself.
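Pruning on a sentry is controlled in app.toml; a common custom setting looks roughly like the sketch below, and the exact values are a matter of taste:

```toml
# app.toml on a sentry: keep only recent state
pruning = "custom"
pruning-keep-recent = "100"
pruning-interval = "10"
```

On the validator itself, setting pruning = "nothing" disables pruning entirely.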