Questions regarding Docker Image Composition #183
Hi @theAkito - answering your questions:
See also grocy/docs#7.
I see. Well, when I was trying to figure out how I would set this server up, I was wondering whether I could replace this Nginx instance with an external one. If I set up this server, it will be deployed via a self-manufactured Helm Chart on a single-node stock Kubernetes cluster, which already has an Nginx Ingress Controller that can be freely configured, i.e. making the provided Nginx instance redundant, provided the external one is properly set up. I'm just not sure about this. https://github.com/grocy/grocy-docker/blob/main/Containerfile-frontend#L57-L58
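For illustration, routing via an existing Ingress Controller might look roughly like the sketch below. All names here (`grocy-frontend`, port `8443`, the hostname) are assumptions for the example, not values taken from this repo; since the frontend container terminates TLS itself, the controller would have to proxy HTTPS to it.

```yaml
# Hypothetical Ingress forwarding all traffic to the bundled frontend service.
# Service name, port, and host are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grocy
  annotations:
    # tell ingress-nginx to speak HTTPS to the upstream, since the
    # frontend container terminates TLS itself
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: grocy.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grocy-frontend
                port:
                  number: 8443
```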
Ah yes, that's what I feared could be the case. Though, if the backend does not actively expect this certificate, I guess an external Nginx would still work in a manual custom composition.
Exactly. Especially, if the cluster has this set up already, anyway. No need for another Nginx.
That's a good one. Would welcome progress on this. Thank you very much for the quick and thorough reply. It helps in understanding this server scenario.
No problem, thanks! - your use-case makes sense, I think -- let me check: you'd like to deploy the Grocy […]

If so: yes, the PHP proxy-forwarding that the frontend […]

However: the Nginx Ingress Controller isn't intended to serve content (HTML, CSS, JS) -- and some of those requests (I think, if I understand correctly) will be for those resources -- so the […]

Closely related to that: the […]
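To illustrate the point about serving content: the kind of split a frontend Nginx typically performs looks roughly like this. The paths, port, and upstream name below are assumptions for the example, not this repo's actual configuration.

```nginx
server {
    listen 8443 ssl;

    # static resources (HTML, CSS, JS) are served directly from disk --
    # this is the part an Ingress controller is not designed to do
    root /var/www/grocy/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    # PHP requests are forwarded to the backend container
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass backend:9000;
    }
}
```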
Yes, at least the […]
I guess that's the case. No way around having a container like this. Though the least that can be done is to not fiddle with TLS in this scope, I suppose. I think the conclusion from this is that the two images that are already there are pretty much usable in Kubernetes, though the TLS configuration should be made optional. Is that the case?
That seems reasonable, yep - something along the lines of #184? (I'm not sure conditional behaviour or include-from-file statements are valid within […])
#184 looks very reasonable. (Except the Dockerfile duplication.)
This could be easily circumvented by using Dockerfile composition tools, though I'm not sure if this would be out of scope, i.e. too much work just for the sake of making TLS optional. That said, it's also possible to use stock Docker methods for making something optional. For example […]

Maybe the middle path between the two solutions would be to have a configuration file in a volume mounted in an image, which is read on start or first initialisation. If an option in that configuration file tells the server to use a self-signed certificate, then it will generate it on the fly; if not, it won't touch the container. Instead of the configuration file, an ENV could be used. This is common across Docker images, anyway.

Most importantly, all these solution proposals are better than having two Dockerfiles for the same stuff, but with different options. It just complects the server scenario in a bad way.
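As a rough sketch of the ENV-driven variant: an entrypoint fragment that only generates a self-signed certificate when asked to, and otherwise leaves TLS to an external proxy. The variable and path names (`GROCY_SELF_SIGNED_TLS`, `CERT_DIR`) are made up for illustration — the project doesn't currently ship anything like this.

```shell
#!/bin/sh
# Hypothetical entrypoint fragment: GROCY_SELF_SIGNED_TLS and CERT_DIR are
# illustrative names, not options the project actually provides.
set -eu

setup_tls() {
  cert_dir="${CERT_DIR:-/etc/nginx/ssl}"
  if [ "${GROCY_SELF_SIGNED_TLS:-false}" = "true" ]; then
    mkdir -p "$cert_dir"
    # generate a throwaway self-signed certificate on first start only
    if [ ! -f "$cert_dir/server.crt" ]; then
      openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -keyout "$cert_dir/server.key" \
        -out "$cert_dir/server.crt" >/dev/null 2>&1
    fi
    echo "self-signed"
  else
    # leave TLS termination to an external proxy / ingress
    echo "external"
  fi
}
```

The point is that the image itself stays unconditional; only the runtime behaviour branches, which keeps a single Dockerfile.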
Thanks for the additional ideas - I've been thinking about these and experimenting a bit.

I like the certificate-on-demand approach - it does sound slightly complicated/risk-prone though: the co-ordination of a volume, configuration file, and runtime event handling (adding logic to detect during a request that a cert doesn't exist, and then generate one with the appropriate options). That would require careful design and planning.

From some research, I did also take a look into making the container build conditional on env/arg parameters - it's possible, with the exception of the […]

However: personally I quite like that the containerfiles don't contain any conditional logic - they are easier to understand when they perform a predictable series of steps. The […]

Duplicating the file content definitely isn't ideal, but I think the cost of that would primarily be a maintenance burden -- and my guess is that not many people would use it. If that's true, then the duplicate file could be removed after a few months.
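For reference, the build-arg variant discussed above could be sketched as below. The ARG name and paths are assumptions, and — as noted — this trades away the containerfile's unconditional, predictable series of steps:

```dockerfile
# Hypothetical fragment: gate the self-signed-certificate step behind a
# build argument (must be declared after FROM to be visible in RUN).
# Build with:  docker build --build-arg ENABLE_SELF_SIGNED_TLS=false .
ARG ENABLE_SELF_SIGNED_TLS=true
RUN if [ "$ENABLE_SELF_SIGNED_TLS" = "true" ]; then \
      mkdir -p /etc/nginx/ssl && \
      openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -keyout /etc/nginx/ssl/server.key \
        -out /etc/nginx/ssl/server.crt; \
    fi
```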
An attempt to be a bit more precise about a design using […]
I think that this would allow a fairly straightforward way to build a static container that serves local traffic with TLS for most users in a zero-configuration manner.

❓ However - an important question is: what ACME server would the running container contact in order to request updated certificate material? For a private domain, this could be challenging.

Worth noting: […]
Self-reply: yep: for a private/internal domain name, this implies that only an internal/private certificate authority is an option, I think.
A detail note: this could produce some unexpected results currently: both […] (so the request would be considered valid to request a cert from an ACME server -- public or local; a public ACME server shouldn't be able to validate a non-public domain like […]).

Back to practical matters, though: in the user-at-home case, the built-in […]
The assumption is usually, in the case of such servers, that there is already a configuration method implemented at the application level, and adding a certificate is a matter of just adding another option, which might be configured either via file or via ENV. In this scenario it is perhaps a bit different, as you already outlined, since, as far as I understand now, […] try to streamline & make the setup as easy & quick as possible.

It's also usually the case that the risk-prone aspect of that approach is minimised with fail-early-and-quick errors, which make the user solve the instantly appearing errors so quickly that it appears more or less like a streamlined experience. I think the lack of streamlinedness kicks in at the point when something errors really late in the setup process, plus the error does not tell the user what is actually wrong, e.g. "invalid path" instead of "Configuration file not found. This server needs a […]"
Does not look bad, though I have no experience with that software yet, so I cannot issue a qualified comment regarding it.
We can scratch that one. Does not seem like the right solution in this scenario, because it would require this conditional approach referred to previously, anyway.
I get that. I initially thought about keeping it "in sync" indefinitely and maintaining double the amount of Dockerfiles. However, if you plan this as a phase-out method anyway, then it's not too bad an idea. It's just that, if, for whatever reason, the second Dockerfile doesn't become expendable, then we would be at the same stage of progress as now. 😄
Sounds reasonable, though, as explained earlier, I am unable to respond with a qualified comment regarding that software. I would first need to look into it & research.
Heard that one often. There are a couple of people who would like this, though as far as I know it's not really possible "the official way".
Indeed. I have never dived into that topic too deeply, though this seems like it is really the case.
It must fail, because the domain must actually be publicly available with a URL like […]
If it works in a way which the article from […]
An aside: it's annoying (and seems like a potential cause of Internet-of-Things problems -- including but not limited to vendor attempts to create lock-in) that in 2022, as far as I know, it's not possible to generate a certificate for a local device/application that browsers will display as trustworthy by default. I realize the technical reasons why -- and for a private domain like that there would have to be other well-defined and very-difficult-or-impossible-to-impersonate constraints that mainstream browsers would need to respect in order for the negotiation between a non-public device and a public issuer to result in trustworthy (in a human, practical sense) certificates... but it just feels like it should (again, must?) be possible somehow. Device attestation might be part of a solution, but it's tricky to figure out how that works in the context of elastically-scalable clusters.
Some more rambling ideas: […]
@theAkito I'm closing this issue since I think the questions raised have been answered - I've also opened a separate issue to track experimentation with […]
I've noticed that the Docker image stack used in this scenario is using Nginx in some way I cannot quite clearly discern.
https://github.com/grocy/grocy-docker/blob/main/Containerfile-frontend
Is `nginx` in that Docker image pretty much just the proxy, or what else is it doing?