Installs the Hedera Mirror Node Helm wrapper chart. This chart installs the following mirror node components:
- Hedera Mirror Importer
- Hedera Mirror GRPC API
- Hedera Mirror Monitor
- Hedera Mirror REST API
- Hedera Mirror Rosetta API
- Hedera Mirror Web3 API
Set environment variables that will be used for the remainder of the document:
export RELEASE="mirror1"
To install the wrapper chart:
$ helm repo add hedera https://hashgraph.github.io/hedera-mirror-node/charts
$ helm upgrade --install "${RELEASE}" hedera/hedera-mirror
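The chart can also be installed into its own namespace with local configuration overrides. A minimal sketch; the mirror namespace and the custom.yaml file are illustrative, not required by the chart:
$ helm upgrade --install "${RELEASE}" hedera/hedera-mirror \
    --create-namespace \
    --namespace mirror \
    -f custom.yaml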
This chart supports automatic generation of random passwords. On initial installation, a secure, random password for each chart component will be generated and stored in a Kubernetes secret. During upgrades, Helm will look up the existing secret and ensure the passwords stay the same between upgrades. You can retrieve the generated passwords from the mirror-passwords Kubernetes secret. It's recommended to use ksd to automatically base64 decode secrets.
kubectl get secret mirror-passwords -o yaml | ksd
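If ksd is not installed, kubectl and base64 can decode a single value directly. A minimal sketch; the HEDERA_MIRROR_IMPORTER_DB_PASSWORD key name is illustrative and may differ in your release:
# key name is illustrative; list all keys with: kubectl get secret mirror-passwords -o jsonpath='{.data}'
kubectl get secret mirror-passwords -o jsonpath="{.data.HEDERA_MIRROR_IMPORTER_DB_PASSWORD}" | base64 -d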
When running against a network other than a public network (e.g., demo, previewnet, testnet, or mainnet), the network must be updated with an initial address book file prior to deploying the chart.
- First acquire the address book file and encode its contents to Base64:
$ base64 --input ~/addressbook.bin
- Then populate the importer's addressBook property in the custom values.yaml with the Base64 output:
importer:
  addressBook: CtYGGgUwLjAuN...
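Both steps can be combined in a small shell sketch, assuming the address book was saved to ~/addressbook.bin:
# macOS syntax shown; on GNU/Linux use: base64 -w 0 ~/addressbook.bin
cat > custom.yaml <<EOF
importer:
  addressBook: $(base64 --input ~/addressbook.bin)
EOF
helm upgrade --install "${RELEASE}" hedera/hedera-mirror -f custom.yaml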
The mirror node chart uses the Traefik chart to manage access to cluster services through an Ingress and to route traffic through a load balancer. The default configuration uses a self-signed certificate to secure traffic over TLS.
In production, it is advised to use a certificate authority signed certificate and an external load balancer to allow for more secure and complex load balancing needs. The following diagram illustrates a high-level overview of the resources used in the recommended traffic flow.
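For example, a CA-signed certificate can be stored as a standard Kubernetes TLS secret and then referenced from the Traefik configuration. A sketch; the secret name and file names are illustrative, and how the secret is wired into Traefik depends on your chart values:
# secret and file names are illustrative
kubectl create secret tls mirror-tls --cert=tls.crt --key=tls.key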
When deploying in GCP, the following steps may be taken to use a container-native load balancer through a Standalone NEG.
- Create a Kubernetes cluster utilizing a custom subnet. This can be done by setting a unique name for the subnet in the UI or on the command line with the following command:

  gcloud container clusters create mirrornode-lb \
    --enable-ip-alias \
    --create-subnetwork="" \
    --network=default \
    --zone=us-central1-a \
    --cluster-version=1.21.5-gke.1802 \
    --machine-type=n1-standard-4

- Configure Traefik to use the external load balancer. The following default production setup configures the Standalone NEG and exposes port 443 for HTTPS based traffic. This load balancer is a GCP container-native load balancer through a standalone NEG; please modify for other cloud providers. Apply this config to your local values file (i.e. custom.yaml) for use in the helm deployment:

  traefik:
    service:
      annotations:
        cloud.google.com/neg: '{"exposed_ports": {"443": {"name": "<tls_neg_name>"}}}'

  Note: Ensure the NEG names are unique per cluster to support shared NEGs across separate globally distributed clusters.

  The annotation will ensure that a NEG is created for each name specified, with the endpoints pointing to the Traefik pod IPs in your cluster on the configured port. These ports should match the ports exposed by Traefik in the common chart .Values.traefik.ports. The created resources can be verified with the gcloud sketch after this list.

- Create a Google Managed Certificate for use by the load balancer.

- Create an External HTTPS load balancer and one or more backend services that utilize the automatically created NEGs pointing to the Traefik pods.
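After the cluster and Traefik are deployed, the custom subnet and the automatically created NEGs can be verified with gcloud. A minimal sketch; the cluster name and zone match the example above:
# confirm the cluster is using the auto-created subnet
gcloud container clusters describe mirrornode-lb --zone=us-central1-a --format="value(subnetwork)"
# list the NEGs created by the Traefik service annotation
gcloud compute network-endpoint-groups list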
To verify the chart installation is successful, you can run the helm tests. These tests are not automatically executed by helm on install/upgrade; they have to be executed manually. The tests require the operatorId and operatorKey properties to be set in a local values file in order to execute, as well as network if using an environment other than testnet, and nodes if using a custom environment.
To configure:
test:
  config:
    hedera:
      mirror:
        test:
          acceptance:
            network:
            # Do not use 0.0.2 or 0.0.50 for operator to ensure crypto transfers are not waived
            operatorId:
            operatorKey:
To execute:
helm test "${RELEASE}" --timeout 10m
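Recent Helm 3 versions also support a --logs flag, which prints the test pod logs after the run completes:
helm test "${RELEASE}" --logs --timeout 10m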
All the public APIs can be accessed via a single IP. First, get the load balancer IP address:
export SERVICE_IP=$(kubectl get service "${RELEASE}-traefik" -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
To access the GRPC API (using grpcurl):
grpcurl -plaintext ${SERVICE_IP}:80 list
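Individual services can then be inspected. A sketch assuming the consensus service is exposed under com.hedera.mirror.api.proto.ConsensusService; verify the exact name in the list output:
grpcurl -plaintext "${SERVICE_IP}:80" describe com.hedera.mirror.api.proto.ConsensusService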
To access the REST API:
curl -s "http://${SERVICE_IP}/api/v1/transactions?limit=1"
To access the Rosetta API:
curl -sL -d '{"metadata":{}}' "http://${SERVICE_IP}/network/list"
To access the Web3 API:
curl -sL -H "Content-Type: application/json" -X POST -d '{"id": 1, "jsonrpc": "2.0", "method": "eth_blockNumber"}' "http://${SERVICE_IP}/web3/v1"
To view the Grafana dashboard:
kubectl port-forward service/${RELEASE}-grafana 8080:80 &
open "http://localhost:8080"
To remove all the Kubernetes components associated with the chart and delete the release:
helm delete "${RELEASE}"
The above command does not delete any of the underlying persistent volumes. To delete all the data associated with this release:
kubectl delete $(kubectl get pvc -o name)
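Since the above deletes every PVC in the namespace, a narrower sketch scopes the deletion to this release via the standard Helm instance label (assuming the chart applies app.kubernetes.io/instance, as Helm charts conventionally do):
# delete only the PVCs belonging to this release
kubectl delete pvc -l app.kubernetes.io/instance="${RELEASE}"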
To troubleshoot a pod, you can view its logs and describe the pod to see its status. See the kubectl documentation for more commands.
kubectl describe pod "${POD_NAME}"
kubectl logs -f --tail=100 "${POD_NAME}"
kubectl logs -f --prefix --tail=10 -l app.kubernetes.io/name=importer
To change application properties without restarting, you can create a ConfigMap named hedera-mirror-grpc or hedera-mirror-importer and supply an application.yaml or application.properties. Note that some properties that are used on startup will still require a restart.
echo "logging.level.com.hedera.mirror.grpc=TRACE" > application.properties
kubectl create configmap hedera-mirror-grpc --from-file=application.properties
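If the ConfigMap already exists, kubectl create will fail. A common idiom regenerates the manifest and applies it in place:
# update an existing ConfigMap from the local file
kubectl create configmap hedera-mirror-grpc --from-file=application.properties --dry-run=client -o yaml | kubectl apply -f -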
Dashboards, metrics, and alerts can be viewed via Grafana. See the Using section for how to connect to Grafana.
To connect to the database and run queries:
kubectl exec -it "${RELEASE}-postgres-postgresql-0" -c postgresql -- psql -d mirror_node -U mirror_node
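For example, to sanity check that the importer is ingesting data, a single query can be run non-interactively. A sketch; the transaction table name assumes the standard mirror node schema:
# count ingested transactions; table name assumes the standard mirror node schema
kubectl exec -it "${RELEASE}-postgres-postgresql-0" -c postgresql -- psql -d mirror_node -U mirror_node -c "select count(*) from transaction;"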
A thread dump can be taken by sending a QUIT signal to the Java process inside the container. The thread dump output will be visible via the container logs.
kubectl exec "${POD_NAME}" -- kill -QUIT 1
kubectl logs -f "${POD_NAME}"
Prometheus AlertManager is used to monitor and alert for ongoing issues in the cluster. If an alert is received via a notification mechanism like Slack or PagerDuty, it should contain enough details to know where to start the investigation. Active alerts can be viewed via the AlertManager dashboard in Grafana. To see further details, or to silence or suppress an alert, use the AlertManager UI. To access the AlertManager UI, expose it via kubectl:
kubectl port-forward service/${RELEASE}-prometheus-alertmanager 9093:9093 &
open http://localhost:9093
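Alerts can also be silenced from the command line with amtool, which ships with AlertManager. A sketch, assuming amtool is installed locally and the port-forward above is active; the alertname is illustrative:
# silence a firing alert for two hours; alertname is illustrative
amtool silence add alertname="ImporterNoTransactions" --comment="investigating" --duration="2h" --alertmanager.url=http://localhost:9093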