
Docker environment variables are not passed to borgmatic #320

Open

DamienVicet opened this issue Apr 20, 2024 · 20 comments

@DamienVicet

Hello, I use borgmatic with docker-compose to back up my Nextcloud installation. You can find my setup repository here: https://gitlab.com/dvicet/nextcloud-collabora-docker-and-borg-backup-scripts

Here is how I launch borgmatic in my compose file:

services:
  borgmatic:
    image: ghcr.io/borgmatic-collective/borgmatic:1.8
    command: ["--verbosity", "1", "--list", "--stats"]
    container_name: borgmatic-nextcloud
    volumes:
      - nextcloud:/mnt/source/nextcloud:ro
      - data:/mnt/source/data:ro
      - ./borgmatic-config.yml:/etc/borgmatic.d/  # borgmatic config file(s)
      - ./ssh:/root/.ssh                          # ssh key for remote repositories
      - borgcache:/root/.cache/borg               # checksums used for deduplication
    environment:
      TZ: ${TIMEZONE}
      BORG_REPO: ${BORG_REPO}
      BORG_PASSPHRASE: ${BORG_PASSPHRASE}
      DB_PASSWORD: ${DB_PASSWORD}

Since borgmatic 1.8.10, I first need to change the command to ["borgmatic", "--verbosity", "1", "--list", "--stats"] to make borgmatic work. And then I get this error:

borgmatic-nextcloud  | CRITICAL:None:Cannot find variable DB_PASSWORD in environment

It seems that my environment variable DB_PASSWORD is not passed to the container.

Is it a bug, or am I missing something in my configuration?

Thank you in advance!

@modem7
Member

modem7 commented Apr 24, 2024

Is there any particular reason you're using command: ["borgmatic", "--verbosity", "1", "--list", "--stats"] instead of using cron?

@mo-krauti

mo-krauti commented Apr 25, 2024

I also noticed that when setting the BORG_PASSPHRASE variable, borg still asks me for my passphrase.
I tried DEBUG_SECRETS, which did not change anything, so I rebuilt the image to always print the secrets, and the passphrase I set in my environment showed up. It seems that the environment inside the container is completely broken.

Image version is 1.8.10, using the docker-compose file from this repo and the example command from the readme:

docker-compose run --rm borgmatic borgmatic list

@modem7
Member

modem7 commented Apr 25, 2024

> I also noticed that when setting the BORG_PASSPHRASE variable, borg still asks me for my passphrase. I tried DEBUG_SECRETS, which did not change anything, so I rebuilt the image to always print the secrets, and the passphrase I set in my environment showed up. It seems that the environment inside the container is completely broken.
>
> Image version is 1.8.10, using the docker-compose file from this repo and the example command from the readme:
>
> docker-compose run --rm borgmatic borgmatic list

Could you paste your entire docker-compose.yaml here, please (obviously, make sure to change the password to something else)?

When I check the environment with a test password, it's certainly showing the variable being passed through.

docker exec -it Borgmatic /bin/bash
adb573f9a6be:/# echo $BORG_PASSPHRASE
594c03cb656e5202c96ae0b7eabad401ba32f94f595518084773dc7caecc6c82924d7e5fb6e7216a3e7304817468103c39c14d2fa7f776a679bf2f3918ce9f9a
adb573f9a6be:/#  

We've also just built a new image about four hours ago, which better handles container variables being passed through to the application itself. Please pull the latest version.
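For example (assuming a compose-managed service named borgmatic):

# refresh the image and recreate the service
docker-compose pull borgmatic && docker-compose up -d borgmatic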

We're still in the process of overhauling the documentation.

This is my current compose file (a bit more complex, but you can see the definitions):

  #############
  ##Borgmatic##
  #############

  borgmatic:
    # image: modem7/borgmatic-docker
    # image: borgmatic:test
    image: ghcr.io/borgmatic-collective/borgmatic
    container_name: Borgmatic
    environment:
      TZ: $TZ
      BORG_PASSPHRASE: $BORG_PASSPHRASE
      BORG_SOURCE_1: $BORG_SOURCE_1
      BORG_SOURCE_2: $BORG_SOURCE_2
      BORG_REPO: $BORG_REPO
      BORG_HEALTHCHECK_URL: $BORG_HEALTHCHECK_URL
      # EXTRA_PKGS: postgresql16-client rclone coreutils jq
      DOCKERCLI: true
      CRON: $BORG_CRON
      CRON_COMMAND: $BORG_CRON_COMMAND
      # EXTRA_CRON: |-
      #   @daily borgmatic check -v 1 2>&1 > tee /mnt/log/check-$(date +\%Y-\%m-\%dT\%H:\%M:\%S).log
      #   0 7 1 * * command2
    logging:
      driver: "local"
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - $BORGHOMESOURCEDIR:/mnt/source/
      # - $CRONTAB:/mnt/source/Cron
      # - Pihole:/mnt/source/Pihole/Pihole
      # - Dnsmasq:/mnt/source/Pihole/Dnsmasq
      - $BORGSERVBACKUPDIR/Database:/mnt/borg-DBrepository
      - $BORGSERVBACKUPDIR/Docker:/mnt/borg-repository
      - $RAMDRIVEBACKUP/borg:/mnt/ramdrive
      - $USERDIR:/mnt/source/DockerApps/
      - $USERDIR/Borgmatic/borgmatic.d/:/etc/borgmatic.d/
      - $USERDIR/Borgmatic/.config/borg/:/root/.config/borg
      - $USERDIR/Borgmatic/.ssh/:/root/.ssh
      - $USERDIR/Borgmatic/.state/:/root/.borgmatic
      - $USERDIR/Borgmatic/.cache/borg/:/root/.cache/borg
      - $DOCKERDIR/HDA/.env:/mnt/docker/HDA/.env:ro
      - $DOCKERDIR/HDA/docker-compose.yml:/mnt/docker/HDA/docker-compose.yml:ro
      - $BORGSCRIPTS:/borgscripts:ro
      - /var/run/docker.sock:/var/run/docker.sock # So we can run scripts
      # - /mnt/downloads/script/:/custom-cont-init.d:ro
    networks:
      isonet:
      isolated:
    restart: always

And no issues with borgmatic list:

❯ docker exec Borgmatic sh -c "borgmatic list"
ANSWER:borgmatic.actions.list:Borgbase: Listing archives
ANSWER:borgmatic.execute:backup-2022-12-31T05:31:56           Sat, 2022-12-31 05:31:57 [89f8467d3910f538f8c25253b4e0e9dc3ae5d20602f87b19e4ae24abf6ad805b]
ANSWER:borgmatic.execute:backup-2023-03-27T05:02:56           Mon, 2023-03-27 05:02:58 [9a0db04d845614cfc497213617718dc989c72ecb0d952f0d41bc100e078914f3]
ANSWER:borgmatic.execute:backup-2023-04-30T05:03:08           Sun, 2023-04-30 05:03:09 [d1a2df1f661e6e5e3e26661646ddb0579b1da6dc63338d5fb6cfb34976cde2f0]
ANSWER:borgmatic.execute:backup-2023-05-31T05:00:11           Wed, 2023-05-31 05:00:13 [429c6ee4eb60f4d5ed7fd3e92b3d595f3696323b5d88a49ed0862b432d2f33bb]
ANSWER:borgmatic.execute:backup-2023-06-30T05:00:11           Fri, 2023-06-30 05:00:13 [91c2275c8242d68695debca794fa3579cbdcb89966af107ed07e393def8c4dbe]
ANSWER:borgmatic.execute:backup-2023-07-31T05:00:11           Mon, 2023-07-31 05:00:12 [0f02347c07b0538fa5b3e57d0dea8568394668fd56dcc90b1786a94f30c65dae]
ANSWER:borgmatic.execute:backup-2023-08-31T05:00:12           Thu, 2023-08-31 05:00:13 [6e731b7d0500a9ccc904d1d8d0e8d591641a6f6b4d83d4eb1c51ebb8c99a2ec6]
ANSWER:borgmatic.execute:backup-2023-09-28T05:01:06           Thu, 2023-09-28 05:01:08 [edc729f1cbe90dff19cd2c690439651402d14c72c2a1d295ae554477c6a015d9]
ANSWER:borgmatic.execute:backup-2023-10-31T05:01:05           Tue, 2023-10-31 05:01:06 [1018efb367f482d6e8a881150ee429ada318b7be415ad442090a47c14bd79c10]
ANSWER:borgmatic.execute:backup-2023-11-30T05:01:04           Thu, 2023-11-30 05:01:06 [7f2e67f2a4b5a4d8aaecde8f7adf4604010238de55485700ffcb20731e54ba34]
ANSWER:borgmatic.execute:backup-2023-12-31T05:01:06           Sun, 2023-12-31 05:01:07 [0d4f52226d5394ce1999a81dd09a8c4639aed4a0d2e7b11408072d3fe98d01c9]
ANSWER:borgmatic.execute:backup-2024-01-31T05:01:04           Wed, 2024-01-31 05:01:05 [485b4e389ccd6f0a79d1fa3d7464afce6eb17a0c9826322dbaf5dab8cf37fc09]
ANSWER:borgmatic.execute:backup-2024-02-29T05:01:05           Thu, 2024-02-29 05:01:06 [3fe5398a98f375b8fb965728620a4a34474c6e5944f64be5069ab22c9dd136d1]
ANSWER:borgmatic.execute:backup-2024-03-24T05:01:12           Sun, 2024-03-24 05:01:14 [0e020d9a6688f7a04f6cd7cfd2e49519e0cc26a9b680453cb9d7e981241ec56b]
ANSWER:borgmatic.execute:backup-2024-03-31T05:01:05           Sun, 2024-03-31 05:01:06 [02887ab8ff34f7779e99cde6131f95acdba8751b9e9463840a0e4f702a6bf576]
ANSWER:borgmatic.execute:backup-2024-04-07T05:01:04           Sun, 2024-04-07 05:01:05 [96fa0bbc26126e1ba60717c2efe6df1a6e2b2b6f5d67d4a86c70a24338246120]
ANSWER:borgmatic.execute:backup-2024-04-14T05:01:03           Sun, 2024-04-14 05:01:04 [0bf0e50e7beb9f6223b6c6d632373750c7681dbf0bb624692b668b611fc8b4b4]
ANSWER:borgmatic.execute:backup-2024-04-19T05:01:04           Fri, 2024-04-19 05:01:06 [27678b38462f20d6cd447df6156ed5451b14e95245238d65ce010856ee80c20e]
ANSWER:borgmatic.execute:backup-2024-04-20T05:01:04           Sat, 2024-04-20 05:01:05 [15104c9a40f6e423f4fd7a36f4ee2352561a14b3da73e789100131a9cf732e4d]
ANSWER:borgmatic.execute:backup-2024-04-21T05:01:03           Sun, 2024-04-21 05:01:05 [c699dc8abf86c2fa506504029d8586e3d2dcd24424d006cc8c63968628b65b25]
ANSWER:borgmatic.execute:backup-2024-04-22T05:01:04           Mon, 2024-04-22 05:01:05 [97ae31953e626be23c60758b0be06c2c2a6ce2683a812d347db085974353a611]
ANSWER:borgmatic.execute:backup-2024-04-23T09:28:49           Tue, 2024-04-23 09:28:51 [1ac6644eb383894a331b86d6005f4e6a5ccf29cf86e1603a1835f4fa4654ab85]
ANSWER:borgmatic.execute:backup-2024-04-24T05:01:04           Wed, 2024-04-24 05:01:05 [45295dc0e7ad177e03f741c79a36003de3328ed843e7aca0854ef03893b933a0]
ANSWER:borgmatic.execute:backup-2024-04-25T08:14:03           Thu, 2024-04-25 08:14:04 [1f9b23e44ed94464175d8d7a1ecde1027a7df72e8954008f8d4495038a1c76b9]

@DamienVicet
Author

DamienVicet commented May 1, 2024

I am not using cron inside the container; I launch the container with a script on the host machine, as suggested in the readme: https://github.com/borgmatic-collective/docker-borgmatic?tab=readme-ov-file#run-borgmatic-like-a-binary-through-a-container

I dug a bit more, and it seems related to s6. I don't know exactly how it works, but with the env variable S6_KEEP_ENV I am able to use my env variables. But there is another problem: the exit code of borgmatic is not returned by the container, so my script cannot tell whether the backup failed or succeeded.

An alternative solution is to change the entrypoint of the container instead of using a command.

I finally changed my compose file to this:

services:
  # Backups with Borg
  borgmatic:
    image: ghcr.io/borgmatic-collective/borgmatic:1.8.11
    entrypoint: borgmatic --verbosity 1 --list --stats
    container_name: borgmatic-nextcloud
    volumes:
      - nextcloud:/mnt/source/nextcloud:ro
      - data:/mnt/source/data:ro
      - ./borgmatic-config.yml:/etc/borgmatic.d/  # borgmatic config file(s)
      - ./ssh:/root/.ssh                          # ssh key for remote repositories
      - borgcache:/root/.cache/borg               # checksums used for deduplication
    environment:
      TZ: ${TIMEZONE}
      BORG_REPO: ${BORG_REPO}
      BORG_PASSPHRASE: ${BORG_PASSPHRASE}
      DB_PASSWORD: ${DB_PASSWORD}
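With borgmatic itself as the entrypoint, the container exits with borgmatic's own exit code, so a host script can check it directly. A minimal sketch, assuming the service above:

# `docker compose run` returns the container's exit status
docker compose run --rm borgmatic || echo "backup failed"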

@modem7
Member

modem7 commented May 1, 2024

Changing the entrypoint from /init will stop S6 from running and won't allow s6 to run as PID 1.

If that's what you wish, then that's certainly something you can do; it would bypass all the run scripts, e.g. https://github.com/borgmatic-collective/docker-borgmatic/blob/master/root/etc/s6-overlay/s6-rc.d/svc-cron/run.

You can modify how S6 behaves if that's required for your particular setup; here are some options:

https://github.com/just-containers/s6-overlay?tab=readme-ov-file#customizing-s6-overlay-behaviour

Env vars are currently sent to the cron process in question rather than to the entire container, so S6_KEEP_ENV certainly would work for you (you can set this in your own compose file).
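A minimal sketch of that override in a compose file (service name assumed to match yours):

services:
  borgmatic:
    environment:
      S6_KEEP_ENV: 1  # s6-overlay: keep the container's initial environment for supervised processes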

As stated above, the readme does need redoing, as there are a lot of legacy things there which need an overhaul; unfortunately, time is always an issue. We're happy to receive PRs to assist.

Exit codes certainly seem to be sent and received (although I suspect that this may be from the cron service rather than the Borgmatic service, as Borgmatic isn't a background service by default, whereas cron is).

For edge cases of running Borgmatic as a binary through Docker, we'd probably have to rethink a few things. It wouldn't be too much of an issue, but it would require resources allocated to it; it's currently only really been fully tested running as a cron service rather than as a single binary. But calling the binary directly, rather than utilising S6 in any fashion, may be the better solution for you.
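As a sketch of that direct-binary approach (the tag, mounts, and variables here are illustrative, not a documented invocation):

# pass the host's env vars through and skip the s6 init entirely
docker run --rm \
  --entrypoint borgmatic \
  -e BORG_REPO -e BORG_PASSPHRASE \
  -v ./borgmatic.d:/etc/borgmatic.d \
  ghcr.io/borgmatic-collective/borgmatic:1.8 \
  --verbosity 1 --list --stats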

@mo-krauti

When setting S6_KEEP_ENV to 1, running the binary from the container works as expected. Thank you!

@johnmmcgee

johnmmcgee commented Jun 2, 2024

Hello all,

I, too, am a user that directly calls the binary. I tried a direct call to the binary (borgmatic --verbosity 1 --stats --list) and noticed that it is no longer passing environment variables (I kept getting authentication errors). I did try S6_KEEP_ENV, and it does not seem to work. I have temporarily reverted to 1.8.9 for the time being.

Here is a copy of my quadlet (note: on 1.8.10+ I changed Exec to Exec=borgmatic --verbosity 1 --list --stats):

# borgmatic.container
[Unit]
Description=borgmatic container

[Container]
Image=ghcr.io/borgmatic-collective/borgmatic:1.8.9
ContainerName=borgmatic
AutoUpdate=registry
DNSSearch=dns.podman
Environment=BORG_RSH="ssh -i /root/.ssh/key"
Environment=BORG_PASSPHRASE_FILE="/root/.config/borg/pkey"
Environment=RUN_ON_STARTUP=true
Exec=--verbosity 1 --list --stats
SecurityLabelDisable=true
Volume=/etc/localtime:/etc/localtime:ro
Volume=/var/srv/containers/borgmatic/restore:/restore:z
Volume=/var/srv/containers/borgmatic/etc_borgmatic.d:/etc/borgmatic.d:Z
Volume=/var/srv/containers/borgmatic/config_borg:/root/.config/borg:Z
Volume=/var/srv/containers/borgmatic/ssh:/root/.ssh:Z
Volume=/var/srv/containers/borgmatic/cache_borg:/root/.cache/borg:Z

[Service]
Type=oneshot
RemainAfterExit=yes
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target default.target

@modem7
Member

modem7 commented Jun 2, 2024

> I, too, am a user that directly calls the binary. I tried a direct call to the binary (borgmatic --verbosity 1 --stats --list) and noticed that it is no longer passing environment variables (I kept getting authentication errors). I did try S6_KEEP_ENV, and it does not seem to work. I have temporarily reverted to 1.8.9 for the time being.
>
> Here is a copy of my quadlet [...]

Instead of Exec=--verbosity 1 --list --stats, try entrypoint=....

@johnmmcgee

> Instead of Exec=--verbosity 1 --list --stats, try entrypoint=....

Entrypoint isn't a documented quadlet option.

https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html

@modem7
Member

modem7 commented Jun 2, 2024

Ah, unfortunately this has not been tested on podman, nor do any of us really have much understanding of its commands.

I'd recommend having a look at how to replace the entrypoint command, as stated here, within podman.
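A sketch of what that might look like with podman run's --entrypoint flag (the tag is illustrative; mounts and env are omitted for brevity):

# override the image's /init entrypoint and call the binary directly
podman run --rm \
  --entrypoint borgmatic \
  ghcr.io/borgmatic-collective/borgmatic:1.8.11 \
  --verbosity 1 --list --stats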

@johnmmcgee

> Ah, unfortunately this has not been tested on podman, nor do any of us really have much understanding of its commands.
>
> I'd recommend having a look at how to replace the entrypoint command, as stated here, within podman.

I didn't expect as much. The issue seems to have sprung up since the change to s6. My current solution is to stay on the last version prior to s6 (1.8.9) until a solution is put in place for people who use the container to launch borgmatic directly rather than leaving it running all of the time. I prefer to use systemd timers. I will be keeping an eye on this thread, and I will be more than happy to test when that comes out.

@modem7
Member

modem7 commented Jun 2, 2024

In all fairness, given that the entrypoint in Docker can be replaced quite easily with a command, that it seems you should be able to replace it in podman as well (https://docs.podman.io/en/latest/markdown/podman-run.1.html#entrypoint-command-command-arg1), and that running the binary is such an edge case, it shouldn't be too much of a problem to just override S6 for your particular requirements.

I'm happy to hear reasons against this, or a PR to be raised so it can be discussed further.

But S6 currently solves a lot more problems than it is causing from my understanding, especially with process handling.

@grantbevis - any thoughts from your end?

@johnmmcgee

johnmmcgee commented Jun 2, 2024

> In all fairness, given that the entrypoint in Docker can be replaced quite easily with a command, that it seems you should be able to replace it in podman as well, and that running the binary is such an edge case, it shouldn't be too much of a problem to just override S6 for your particular requirements. [...]

It could just be the way I am launching it. The issue I had was that when I launched it, it no longer seemed to be picking up the BORG_PASSPHRASE_FILE environment value. Hence I assumed, based on some of the comments here, that this may fall in line with the same issue. I have some time today, so I am going to play around with it a bit more to see if I can get it resolved on my end.

@johnmmcgee

johnmmcgee commented Jun 3, 2024

I was able to retool my configuration and get this to work:

1. Set S6_KEEP_ENV=1 as an environment value.
2. Move my borg password to a secret, and make the following changes to my configuration file so they are no longer environment values:

ssh_command: ssh -i /root/.ssh/rnet
encryption_passphrase: ${BORG_PASSPHRASE}

This all seems to be working as expected now, on the latest image as well.

@modem7
Member

modem7 commented Jun 3, 2024

> I was able to retool my configuration and get this to work: [...]

Thanks for the reply with the solution.

I'll go through this ticket when I get some time and collate the multiple fixes into a troubleshooting section of the readme.

@johnmmcgee

No worries. I feel the issue was user error. I appreciate you guys' work.

@modem7
Member

modem7 commented Jun 3, 2024

> No worries. I feel the issue was user error. I appreciate you guys' work.

It's unfortunately a bit of both: we introduced a breaking change with S6 that affects a very small percentage of edge cases.

However, it seems that there is a good workaround/solution to this breaking change that works for all; we just have not yet gotten around to updating the readme file. Not your fault, bud.

Thanks again for getting back to us! Having the podman details will help a few others without a doubt.

@witten
Collaborator

witten commented Jun 3, 2024

A quick note: you may be able to omit encryption_passphrase from your borgmatic configuration entirely (unless you prefer to have it there to be explicit), since borgmatic should pass $BORG_PASSPHRASE from the environment through to Borg when encryption_passphrase isn't set.
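In other words, a config sketch (the paths and repo below are placeholders) can leave the passphrase out and rely on the container's BORG_PASSPHRASE:

# /etc/borgmatic.d/config.yaml -- no encryption_passphrase set, so
# borgmatic hands $BORG_PASSPHRASE from the environment through to Borg
source_directories:
    - /mnt/source

repositories:
    - path: ssh://user@host/./repo
      label: remote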

@johnmmcgee

This is my final quadlet file, which works. I was having issues with the container properly reading the TZ data, as I usually set it to EDT or set it via the /etc/localtime bind mount, and either usually suffices. However, it seemed I needed to state the region properly for it to work with this container.

# borgmatic.container
[Unit]
Description=borgmatic container

[Container]
Image=ghcr.io/borgmatic-collective/borgmatic:1.8.11
ContainerName=borgmatic
AutoUpdate=registry
Environment=S6_KEEP_ENV=1
Environment=TZ=US/Eastern
Exec=borgmatic --verbosity 1 --list --stats
Secret=borg_passphrase,type=env,target=BORG_PASSPHRASE
SecurityLabelDisable=true
Volume=/var/mnt/photos:/backups/photos:ro         # this is just an example
Volume=/var/srv/containers/borgmatic/restore:/restore:z
Volume=/var/srv/containers/borgmatic/etc_borgmatic.d:/etc/borgmatic.d:Z
Volume=/var/srv/containers/borgmatic/config_borg:/root/.config/borg:Z
Volume=/var/srv/containers/borgmatic/ssh:/root/.ssh:Z
Volume=/var/srv/containers/borgmatic/cache_borg:/root/.cache/borg:Z

[Service]
Restart=no

[Install]
WantedBy=multi-user.target default.target
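One assumption worth noting: the Secret=borg_passphrase line requires that secret to already exist in podman, e.g.:

# create the secret once on the host (value here is a placeholder)
printf '%s' 'your-passphrase' | podman secret create borg_passphrase -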

@BBaoVanC

I was running into this issue in a slightly different situation: trying to open an interactive shell with docker compose run --rm borgmatic bash. This used to work, but it seems I now have to use --entrypoint.

Using docker compose run --rm --entrypoint bash borgmatic works to get an interactive shell with BORG_PASSPHRASE set, and as a bonus, the shell isn't whatever horrible thing it was giving me before (it didn't even have a shell prompt!).
