- A functional k8s cluster.
- An Ingress provider (I use `ingress-nginx`, not to be confused with `nginx-ingress`)
- Secrets defined in `settings/.env.secrets`:
  - `user=zigbee2mqtt`

    `user` is the username to connect to your MQTT broker.
  - `password=CHANGEME`

    `password` is the password for `user` on your MQTT broker.
  - `network_key='[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]'`

    `network_key` is an encryption key for your Zigbee network. You really shouldn't operate without it. It is a series of 16 values, and each value can be between 0 and 255. You can randomly set each individual value to whatever you like. I do not recommend using all one number, or the sequence above. Please note that the array is encapsulated in single quotes. This is because it must be a string value to become a Secret. Don't worry, it will be properly rendered in `secret.yaml` by our lovely initContainer.
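If you want a quick way to produce a random key, here is an illustrative snippet (it assumes a bash shell; any source of 16 random values between 0 and 255 will do):

```
# Illustrative only: print a random network_key line you can paste into settings/.env.secrets
key=$(for i in $(seq 16); do printf '%d,' $((RANDOM % 256)); done)
echo "network_key='[${key%,}]'"
```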
It is important to note that each time the StatefulSet starts or restarts, the
initContainer called `init-configuration` will test whether
`/app/data/configuration.yaml` exists. If it does not exist, the ConfigMap-derived
`configuration.yaml` will be copied to `/app/data/configuration.yaml`. If it
does exist, it will not be overwritten. Environment variable values will override
what is in `configuration.yaml` each time, but the file itself will otherwise never be
overwritten. This is in part because Zigbee2MQTT allows the end user to reconfigure
values in the configuration file via the UI, and these changes would otherwise not
be preserved between restarts.
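In other words, the init step boils down to a copy-if-absent. A minimal sketch of that logic (the ConfigMap mount path here is a placeholder; the real script lives in the StatefulSet manifest in `base`):

```
# Copy the ConfigMap-provided configuration.yaml only if the PVC doesn't already have one.
# /config/configuration.yaml is a placeholder for wherever the ConfigMap is mounted.
if [ ! -f /app/data/configuration.yaml ]; then
  cp /config/configuration.yaml /app/data/configuration.yaml
fi
```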
The values in these files are used to set most needed values.

Required:

```
# You definitely need to change the password and network_key
user=zigbee2mqtt
password=CHANGEME
network_key=GENERATE or '[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]'
mqtt_server=mqtt://mqtt.mosquitto.svc.cluster.local:1883
serial_port=CHANGEME to /dev/tty.XXXXXX or tcp://10.0.0.1:6638
```

- `user` is the MQTT username
- `password` is the MQTT password
- `mqtt_server` is the MQTT server IP address or URL
- `network_key` is the Zigbee network key, if you are migrating. Otherwise you can leave this as `GENERATE` and a random network key will be generated on first run. HUGE NOTE: Do not include spaces in the key. The secret sometimes gets munged by the `initContainer` that processes it. Keep it shorter for safety.
- `serial_port` is the filesystem path to the serial port, e.g. `/dev/ttyACM0`, or the tcp socket definition for a network connection. HUGE NOTE: Do NOT use single or double quotes around this value. It will break.
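If you are unsure what to use for `serial_port`, you can usually find the device on the k8s node where the dongle is plugged in (Linux paths; yours may vary):

```
# Stable by-id symlinks for USB serial adapters (preferred when present)
ls -l /dev/serial/by-id/
# The raw device nodes most Zigbee dongles appear as
ls -l /dev/ttyACM* /dev/ttyUSB* 2>/dev/null
```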
Optional:

These are only used if you have enabled the APM component, which will only work if you are using an APM-enabled container image.

```
apm_token=CHANGEME
apm_url=CHANGEME
```

- `apm_token` is the Bearer authorization token used to connect to the APM URL.
- `apm_url` is the URL of the APM server.
```
timezone=UTC
log_level=info
baudrate=115200
ingress_cert_manager=letsencrypt-staging
ingress_fqdn=Z2M.EXAMPLE.COM
ingress_tls=z2m-tls-certificate
ingress_basic_auth_msg=Authentication Required - Zigbee2MQTT UI
```

Required:

- `timezone` is your local timezone in canonical format, e.g. `America/Chicago`
- `log_level` is the log level you want to set
- `baudrate` is the transmission speed of your Zigbee device
Optional:

- `ingress_cert_manager` is only used if you have enabled the TLS component. It is the issuer you want to use from `cert-manager`.
- `ingress_fqdn` is the fully qualified domain name at which the Zigbee2MQTT UI will be served via the Ingress.
- `ingress_tls` is only used if you have enabled the TLS component. It is the name you wish to give to your TLS certificate. Arbitrary, used by `cert-manager`.
- `ingress_basic_auth_msg` is only used if you have enabled the basic_auth component. This message will be displayed when you attempt to visit the UI for Zigbee2MQTT and basic auth credentials are requested.
This file will only be used if you enable the auth component.

```
z2muser:$apr1$MKqyDzD3$wuIpgAowG7NEi.uUAcCD50
```

This is an Apache `htpasswd` file. You can create your own via:

```
htpasswd -c FILENAME USERNAME
New password:
Re-type new password:
Adding password for user USERNAME
```

More passwords can be added to the same file by omitting the `-c` flag:

```
htpasswd FILENAME USER2
New password:
Re-type new password:
Adding password for user USER2
```

You absolutely should replace `auth` with your own file!
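If you prefer a non-interactive invocation, Apache's `htpasswd` can also take the password on the command line via the `-b` flag (note that the password then ends up in your shell history, so use with care):

```
# -c creates the file, -b takes the password as an argument
htpasswd -cb FILENAME USERNAME 'CHANGEME'
```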
If you want to test with the values in this file:

- Username: `z2muser`
- Password: `zigbee2mqtt`
This file will only be used if you enable the apm component, which will only work if you are using an APM-enabled container image.

```
# You must change the container image to one that supports APM for these to work
node_env=production
apm_service_name=zigbee2mqtt
apm_service_node_name=z2m-node
apm_verify_cert=false
apm_disable_instrumentations=redis
```

- `node_env` must be set to `production` in order for APM data to ship.
- `apm_service_name` is the `service.name` which will be set for all data.
- `apm_service_node_name` is the name assigned to the node. It is useful to set this manually here; otherwise it will be the container id.
- `apm_verify_cert` should probably stay `false` here so that your APM server's self-signed certificate can still be used to secure traffic, even if you don't have the CA for it.
- `apm_disable_instrumentations` accepts a comma-separated list of Node.js modules for which instrumentation data should be ignored. It can't be empty without an error resulting, so `redis` is placed here because Zigbee2MQTT makes no use of Redis.
Some settings are documented in comments in the patch file. All settings are
documented at zigbee2mqtt.io. Do check in `patches/statefulset/settings.yaml`
before including a setting that is already being set as an environment variable.
Basic settings are documented in comments in the patch file. Any other settings are your own choice and responsibility.
In the event that you want to use an APM-enabled container image, you will need
to specify it here. At this time, the only supporting image is at
`untergeek/zigbee2mqtt:1.41.0-apm`, in which the only changes from the out of the
box image of the same version are as follows:

- The top 5 lines of `index.js` are now:

  ```
  require('elastic-apm-node').start({
    active: process.env.NODE_ENV === 'production'
  })

  const semver = require('semver');
  ```
- `npm install elastic-apm-node --save && \` was injected at line 13 of `docker/Dockerfile` such that this particular `RUN` command now shows:

  ```
  RUN apk add --no-cache --virtual .buildtools make gcc g++ python3 linux-headers git npm && \
      npm install elastic-apm-node --save && \
      npm ci --production --no-audit --no-optional --no-update-notifier && \
      # Serialport needs to be rebuild for Alpine https://serialport.io/docs/9.x.x/guide-installation#alpine-linux
      npm rebuild --build-from-source && \
      apk del .buildtools
  ```
- The original line 40 of the Dockerfile has been commented out and an explanatory line added, such that it now reads:

  ```
  # Set this in your Container definition, not here
  # ENV NODE_ENV production
  ```
The reason is that `NODE_ENV` should not be hard-coded, in my opinion.
If `NODE_ENV` is not set to `production`, Zigbee2MQTT will not attempt to send
APM data to the configured URL.
The resulting container image was published with this command:

```
docker buildx build \
  --file ./docker/Dockerfile \
  --platform linux/amd64 \
  --tag untergeek/zigbee2mqtt:1.41.0-apm \
  --build-arg VERSION=1.41.0-apm \
  --build-arg COMMIT=acd5932c \
  --push \
  .
```
Making use of this image requires you to specify it in this patch file:

```
spec:
  containers:
  - name: zigbee2mqtt
    image: untergeek/zigbee2mqtt:1.41.0-apm
```

If you specify this in the patch file, you can update it using `images` in the
`kustomization.yaml` file:

```
images:
  - name: untergeek/zigbee2mqtt
    newTag: 1.41.0-apm
```
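If you have the `kustomize` CLI installed, you can also set that tag without editing the file by hand (an optional convenience; run it from the directory containing `kustomization.yaml`):

```
# Updates (or adds) the matching entry in the images: list of kustomization.yaml
kustomize edit set image untergeek/zigbee2mqtt=untergeek/zigbee2mqtt:1.41.0-apm
```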
This setting is documented inline in `kustomization.yaml`, but it bears further
explanation here.

`nodeName` will pin the pod to run only on the k8s node where your USB Zigbee controller
is inserted. This is only needed if you are using a USB dongle! For network-based
controllers, like the UZG-01, this setting is unnecessary.

The `nodeName` setting should be indented at the same level as the term `containers`, e.g.

```
spec:
  nodeName: my-k8-node-name
  containers:
  - name: zigbee2mqtt
```
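If you aren't sure what your node names are, you can list them:

```
kubectl get nodes
```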
Comments inline explain much. Feel free to add any extra environment variables as desired. The zigbee2mqtt documentation explains:

> It is possible to override the values in `configuration.yaml` via environment variables. The name of the environment variable should start with `ZIGBEE2MQTT_CONFIG_` followed by the path to the property you want to set in uppercase split by a `_`. In case you want to for example override:
>
> ```
> mqtt:
>   base_topic: zigbee2mqtt
> ```
>
> set `ZIGBEE2MQTT_CONFIG_MQTT_BASE_TOPIC` to the desired value.
So, ideally, use environment variables where it makes sense. Some of those names will get really, really long, though:

```
spec:
  containers:
  - name: zigbee2mqtt
    env:
    - name: ZIGBEE2MQTT_CONFIG_MQTT_BASE_TOPIC
      value: mytopic
```
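Once the pod is running, you can double-check which of these overrides made it into the container environment (default namespace and pod name assumed):

```
kubectl -n zigbee2mqtt exec zigbee2mqtt-0 -- env | grep ZIGBEE2MQTT_CONFIG
```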
If you are using a USB-based Zigbee device, you will need to uncomment and configure
these lines (i.e., set `/dev/ttyACM0` to wherever your device is in `/dev`).

```
volumeMounts:
- name: z2m-data
  mountPath: /app/data
## Uncomment these if you need them
# - name: z2m-udev
#   mountPath: /run/udev
# - name: zigbee-device
#   mountPath: /dev/ttyACM0
```

```
## Uncomment these if you need them
# volumes:
# - name: z2m-udev
#   hostPath:
#     path: /run/udev
# - name: zigbee-device
#   hostPath:
#     path: /dev/ttyACM0
```

If you are uncommenting these lines, unless you only have 1 k8s node in your
cluster, you will also need to make use of the `nodeName` directive already
described.
```
## Uncomment this if you need to change anything, like specifying a non-default
## storageClassName, or using more or less storage.
# volumeClaimTemplates:
# - metadata:
#     name: z2m-data
#   spec:
#     storageClassName: MY_STORAGECLASS
#     accessModes: [ "ReadWriteOnce" ]
#     resources:
#       requests:
#         storage: 100Mi
```
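If you plan to set `storageClassName`, you can see which storage classes your cluster offers:

```
kubectl get storageclass
```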
You can change the namespace here.

```
namespace: zigbee2mqtt
```

Because we are using a StatefulSet here, you should create the namespace manually before applying. If the namespace were created and destroyed automatically along with everything else, that would defeat the retention of the persistent volume claim we want (and need): deleting a namespace triggers the deletion of everything in it, which would include the PVC we want to retain.
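With the default namespace, that is simply:

```
kubectl create namespace zigbee2mqtt
```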
This will apply the app label `zigbee2mqtt` to everything being created.

```
commonLabels:
  app: zigbee2mqtt
```

This section allows us to apply a new image to upgrade our StatefulSet without
having to edit the `base` files.

```
images:
  - name: koenkk/zigbee2mqtt
    newTag: 1.40.1
  - name: busybox
    newTag: "1.36"
```

The `busybox` image is used in our `initContainer` that does an initial copy of
the `configuration.yaml` ConfigMap to the persistent volume used by our StatefulSet.
Most of the patch files have already been covered. This is where they are applied.

```
### Patches
patches:
  ## ConfigMap settings:
  - path: patches/configmap/configuration.yaml
  ## StatefulSet settings:
  - path: patches/statefulset/settings.yaml
```
Values extracted from the `settings/*` files are applied in the various `replacements`
in the `patches`, as well as in several of the `components`.
- (Optional) To preview generated configuration before deploying:

  ```
  kubectl kustomize .
  ```

- Run the following command to build and deploy:

  ```
  kubectl apply -k .
  ```
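After applying, you can watch the StatefulSet come up (default namespace and name assumed):

```
kubectl -n zigbee2mqtt rollout status statefulset/zigbee2mqtt
kubectl -n zigbee2mqtt get pods
```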
- Copy `kustomization.yaml`, and the `settings`, `patches`, and `replacements` directories to `overlays/NAME`, or whatever directory structure you prefer (see the sketch after this list).
- Each `overlays/NAME` should have its own `kustomization.yaml`, `settings`, `patches`, and `replacements` subdirectories.
- Be sure to update the `resources` section of `kustomization.yaml` to be able to reach the `base` directory:

  ```
  resources:
    - ../../base
  ```

- Enable any `components` you may want in `kustomization.yaml` by uncommenting them.
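A minimal sketch of that copy step, assuming an overlay named `site-a` (the name is illustrative) and that you are in the directory containing the original `kustomization.yaml`:

```
# Create the overlay directory and seed it with copies of the top-level files and directories
mkdir -p overlays/site-a
cp -r kustomization.yaml settings patches replacements overlays/site-a/
```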
Apply the same configuration steps as above for each `overlays/NAME` path you
create.

Use your judgment, but at the very least, you should probably ensure that you are using different values for these settings in each overlay:

- `namespace` -- A must. Absolutely necessary.
- `mqtt.client_id`, to keep them unique.
- `channel` numbers (prevent collisions)
- `network_key` value (again, prevent collisions)
- `nodeName`, if you have multiple k8s nodes each running a USB Zigbee controller
- (Optional) To preview generated configuration before deploying:

  ```
  kubectl kustomize overlays/NAME
  ```

- Run the following command to build and deploy:

  ```
  kubectl apply -k overlays/NAME
  ```
Since this is a StatefulSet, the Persistent Volume should not be deleted in the event that you delete your StatefulSet. This is a deliberate choice. You should be able to back up or restore settings to this volume so long as it exists.

Unless you've modified the `base` configuration, the pod name should always be
`zigbee2mqtt-0`. If you've modified the StatefulSet and chosen a different container
name, then it will be that value followed by a `-0`. For the sake of continuity,
this guide assumes that the pod name is `zigbee2mqtt-0`.
This isn't backup so much as it is, "copy files from the volume attached to the pod."

The format of the command is:

```
kubectl -n NAMESPACE cp PODNAME:/PATH/TO/SOURCE /PATH/TO/DESTINATION
```

where

- `NAMESPACE` is the namespace where `PODNAME` is running
- `PODNAME` is the name of the pod
- `/PATH/TO/SOURCE` is the path to the source directory (or specific file) on the pod.
- `/PATH/TO/DESTINATION` is the local path where you want the entire source directory, or the specific file, to go.
This isn't restore so much as it is, "copy files to the volume attached to the pod."

The format of the command is:

```
kubectl -n NAMESPACE cp /PATH/TO/SOURCE PODNAME:/PATH/TO/DESTINATION
```

where

- `NAMESPACE` is the namespace where `PODNAME` is running
- `PODNAME` is the name of the pod
- `/PATH/TO/SOURCE` is the path to the local directory (or specific file) to be copied to the pod.
- `/PATH/TO/DESTINATION` is the target path on the pod where you want the entire source directory (or specific file) to go.
Backup can be done while the `zigbee2mqtt-0` pod is running. In fact, you cannot
back up data from the volume unless it is attached to a pod.

Using the defaults, our command might look like this (also creating the destination):

```
OUTPUT=./dump; \
mkdir -p $OUTPUT; \
kubectl -n zigbee2mqtt cp zigbee2mqtt-0:/app/data $OUTPUT
```

If we do this on a running system, the contents of `./dump` would look like:

```
$ ls -1 ./dump
configuration.yaml
coordinator_backup.json
database.db
devices.yaml
groups.yaml
secret.yaml
state.json
```
Using the defaults, our command might look like this:

```
FILENAME=configuration.yaml; \
kubectl -n zigbee2mqtt cp zigbee2mqtt-0:/app/data/$FILENAME $FILENAME.bak
```

This will copy `configuration.yaml` from the pod to `configuration.yaml.bak` in
the present working directory.
NOTE: You should never attempt to restore data to this volume while it is
attached to the `zigbee2mqtt-0` pod. It is a running system, and overwriting or
changing files will undoubtedly lead to bad outcomes.
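The first step below is to make sure the StatefulSet is not running. One way to do that (a hedged example; deleting the StatefulSet stops the pod, but the PVC created by `volumeClaimTemplates` is retained):

```
# Adjust the namespace and name if you changed them in your overlay
kubectl -n zigbee2mqtt delete statefulset zigbee2mqtt
```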
- Ensure that the StatefulSet is not running.
- Prepare the `mount_pvc.yaml` manifest in the same path as this README to mount the PVC in a separate pod. You shouldn't have to modify anything unless you have changed from the default in `base`.
- Apply the manifest, manually specifying the `namespace`:

  ```
  kubectl -n NAMESPACE apply -f mount_pvc.yaml
  ```

- Ensure the pod is live, and connect to it:

  ```
  kubectl -n NAMESPACE exec mount-pvc -ti -- ls /app/data
  ```

This should show that the contents of the PVC are in the exact same mount point as they would be in the StatefulSet pod.
At this point you can copy over either a single file or an entire directory. The procedure is slightly different for the full directory, so that will follow the single file copy example.

Using the defaults, our command might look like this:

```
kubectl -n NAMESPACE cp SOURCEFILE mount-pvc:/app/data/DESTFILE
```

This does allow for file renaming. For example:

```
kubectl -n NAMESPACE cp configuration.yaml.bak \
  mount-pvc:/app/data/configuration.yaml
```

This will overwrite the existing `configuration.yaml`.
This is trickier since you can't use the contents of an entire directory as the
`SOURCEFILE`. The command won't accept `PATH/*` arguments. And just setting `PATH`
or `PATH/` will copy `PATH` to the `DESTINATION` such that you will have
`DESTINATION/PATH` instead of the contents of `PATH` being at `DESTINATION`.

The way around this is using tar streams. It's not as daunting as it sounds.
- Navigate into the directory containing the contents you want to restore. You must be at the root of the path whose contents will be at `DESTINATION`, in our case `/app/data` on the pod. If you don't, you'll end up copying whatever is in the directory you are in.
- The command is this:

  ```
  tar cf - . | kubectl -n NAMESPACE exec mount-pvc -i -- tar xf - -C /app/data
  ```

Let's break that down a bit.

- `tar cf - .` captures the present working directory, `.`, as a tar stream, which is piped to:
- `kubectl`
- `-n NAMESPACE` (our namespace)
- `exec` (going to execute a command on a pod)
- `mount-pvc` (the pod name)
- `-i` (the execution will be interactive [the stream])
- `--` (`kubectl` now requires this to mean everything that follows will be part of the command and arguments to be executed in the pod)
- `tar xf - -C /app/data` (extract the tar stream contents into `/app/data`)
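If you want to see exactly what the stream will contain before sending anything to the pod, you can list it locally first:

```
# Local dry run: list the files that would be extracted on the pod
tar cf - . | tar tvf -
```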
I ran a few experiments and this will overwrite existing files on the pod.
You can now verify what's in the volume:

- `ls`:

  ```
  kubectl -n NAMESPACE exec mount-pvc -ti -- ls -l /app/data
  # ... ls output follows
  ```

- `cat`:

  ```
  kubectl -n NAMESPACE exec mount-pvc -ti -- cat /app/data/FILENAME
  # ... contents of FILENAME follow
  ```
At this point, you know you've restored things, so you need to delete the
`mount-pvc` pod by deleting the manifest:

```
kubectl -n NAMESPACE delete -f mount_pvc.yaml
```

This will delete the `mount-pvc` pod, but will leave the volume intact.
You can now recreate your Zigbee2MQTT StatefulSet, and it will use the files you restored as configuration data. Since we had to delete the StatefulSet in order to stop the pod to do the file restore, we need to re-apply our Kustomization:

```
kubectl apply -k .
```

or

```
kubectl apply -k overlays/NAME
```
I recently discovered Velero. I now use it to back up the entire `z2m-data`
persistent volume on a cron-timed basis to my MinIO S3-compatible data store,
so that I can restore it whenever needed. I won't include all of the details of how
to install Velero, or schedule backups, or how to restore, let alone how to set
up MinIO. If you have the drive to do all that, you can make good use of it for
more deployments than just Zigbee2MQTT. Once configured, this is literally all
you need to add as an annotation:

```
backup.velero.io/backup-volumes: z2m-data
```
In context, it looks like this:
```
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zigbee2mqtt
spec:
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: z2m-data
```
The beauty of this is that you can restore to that point in time very easily. In
fact, it's much easier to restore with Velero than with the `kubectl cp` method
shared above. The catch is the extra work of installing MinIO or another
S3-compatible store; the Velero install itself is stupid simple.