Database connection fails when in different network #28
I've been thinking about this, but I'm not entirely sure what the best solution is. Manually adding the networks to the container was the simplest solution, and the spawned containers can use the following network mode to automatically get the same network setup:
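As an illustration only (the image and container names below are placeholders, not the tool's actual setup), docker-py's `network_mode` lets a spawned container share an existing container's network stack, so it reaches everything the backup container can reach:

```python
# Hypothetical sketch: spawn a worker container that reuses the network
# stack of the already-running backup container. Names are placeholders.
import docker

client = docker.from_env()

# "backup" is assumed to be the name of the running backup container;
# the spawned container then sees the same networks without extra wiring.
client.containers.run(
    "restic/restic:latest",
    command=["version"],
    network_mode="container:backup",
    remove=True,
)
```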
My main worry about using exec for streaming dumps into restic is that it might affect performance, but I have never tried this properly. Could there also be cases where tools other than the standard ones are used? What other impact does streaming database dumps from the database containers themselves have? There might be cpu/memory constraints on these containers, for example. Maybe it should be possible to support all options, leaving that to each "backup type" entirely? I'm just thinking ahead to where support for more things is potentially added over time and how users could extend the system to support their custom services (outside of just volume and database backups).
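For reference, a rough sketch of what exec-based streaming into restic could look like with docker-py; the container name, credentials and restic environment below are assumptions for illustration, not the tool's actual configuration:

```python
# Hypothetical sketch: stream a dump out of the database container via
# exec_run and pipe it into restic's stdin. Assumes RESTIC_REPOSITORY and
# RESTIC_PASSWORD are already set in the environment.
import subprocess
import docker

client = docker.from_env()
db = client.containers.get("project_db_1")  # assumed container name

# Stream the dump instead of buffering it all in memory; drop stderr so it
# cannot get mixed into the backup data.
exit_code, chunks = db.exec_run(
    ["pg_dump", "-U", "postgres", "mydb"],
    stream=True,
    stderr=False,
)

restic = subprocess.Popen(
    ["restic", "backup", "--stdin", "--stdin-filename", "mydb.sql"],
    stdin=subprocess.PIPE,
)
for chunk in chunks:
    restic.stdin.write(chunk)
restic.stdin.close()
restic.wait()
```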
I also noticed another problem that is a result of executing
Sadly I only have some small databases to test performance with, but for those I didn't really see a difference. We are probably trading one kind of internal network traffic for the other.
Are you referring to potential version incompatibilities between the tools we expect in the container and the tools actually available? The interface for
Of course there will be a cpu increase when doing the database dump, but this is to be expected and cannot be avoided. The tools like
A plugin architecture of some kind would be best, I think. This would provide the most flexibility.
The question is whether simple volume backups would be considered a plugin as well. If so, either this plugin abstraction would need a way to communicate requirements for the container that backups are run within (-> volume mounts), or
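A minimal sketch of such a plugin interface, purely as an assumption about how it could look (none of these class or method names exist in the tool today), including one way a plugin could declare the volume mounts the backup container needs:

```python
# Hypothetical plugin interface for "backup types"; illustration only.
from abc import ABC, abstractmethod
from typing import Dict


class BackupPlugin(ABC):
    """One backup type (volumes, postgres, mysql, ...)."""

    @abstractmethod
    def ping(self, container) -> bool:
        """Cheap health check, e.g. via exec_run inside the source container."""

    @abstractmethod
    def backup(self, container) -> None:
        """Stream or copy the data into restic."""

    def required_mounts(self, container) -> Dict[str, dict]:
        """Volume mounts the backup container itself needs.

        Empty by default, so database plugins that only use exec_run
        can ignore it.
        """
        return {}


class VolumePlugin(BackupPlugin):
    def required_mounts(self, container) -> Dict[str, dict]:
        # Mount the source container's volumes read-only into the backup container.
        return {
            m["Name"]: {"bind": m["Destination"], "mode": "ro"}
            for m in container.attrs["Mounts"]
            if m.get("Type") == "volume"
        }

    def ping(self, container) -> bool:
        return True

    def backup(self, container) -> None:
        ...  # run restic against the mounted paths
```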
Sorry for the slow response. I'm going to try out dumping a larger database through exec and see. I'm trying to wrap my head around all the advantages and disadvantages of using it.
No worries, I have been a bit busy myself, but I would like to get this tool working since it fits my use case very well. :)
There is an issue when a database that should be backed up is running in a different docker network than the backup container. This problem mostly arises in multi-project setups using docker compose, but it can also be reproduced by explicitly assigning the networks that way. When the backup container and the database container are not in the same network, `rcb status` fails to execute the "ping", and dumping the database contents will also fail.

There are a few possible solutions I can think of:

1. Manually add the required networks to the backup container (configuration by the user).
2. Have the tool retrieve the relevant networks, attach them to the backup container and clean them up again afterwards.
3. Use `exec_run(...)` from the docker container API to execute a command inside the database container instead of executing `pg_isready`, `mysqladmin` etc. inside the backup container via the Python subprocess API.

I think solution number 1 isn't a good idea, because it would break the network isolation that docker networks provide and would require manual configuration by the user. Number 2 is possible, but would mean having to always retrieve the correct networks, adding them and cleaning them up later even for just executing `rcb status`. Number 3 is probably the best option. I already worked on changing the database pinging to use `exec_run` as a proof of concept and it works. This option would also allow removing the dependency on the database client packages from the backup container.

Let me know what you think @einarf :)
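For illustration, the `exec_run` based ping could look roughly like this; the container lookup and user name are assumptions, not the proof-of-concept code itself:

```python
# Hypothetical sketch of pinging postgres via exec_run inside the database
# container, so the backup container needs neither network access to the
# database nor the postgres client packages installed.
import docker

client = docker.from_env()
db = client.containers.get("project_db_1")  # assumed database container name

exit_code, output = db.exec_run(["pg_isready", "-U", "postgres"])
print("database is reachable" if exit_code == 0 else output.decode())
```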