Support updating air-gapped instances #261
@quickwind thanks for the feedback! Your report is a bit dense, so in the future it may be worth splitting it into multiple pieces to track separately. The first question on customized builds should be covered at build time by https://github.com/coreos/coreos-assembler and at runtime by https://github.com/projectatomic/rpm-ostree. That is, you can build a fully customized OS image by tweaking our configuration, or you can overlay packages from an RPM repo. The second question should already be tracked in #240 and #241. If you have any additional unconventional requirements, feel free to add them there.
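For context on the build-time path, a minimal sketch of the coreos-assembler flow might look like the following; the `cosa` subcommands come from the coreos-assembler documentation, but the config repo URL would normally be your own fork, and the details may have changed since this was written.

```bash
# coreos-assembler is normally run as a container; "cosa" is typically a shell
# alias/function wrapping "podman run ... quay.io/coreos-assembler/coreos-assembler".
mkdir fcos-build && cd fcos-build

# Initialize the working directory against a (possibly forked) config repo.
cosa init https://github.com/coreos/fedora-coreos-config

# Fetch packages and build a customized OS image from that config.
cosa fetch
cosa build
```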
Thanks @lucab, but I don't think these issues cover the question: #240 is about mirror servers and #241 is about Kubernetes environments, but in our use case we are only allowed to deploy a single virtual appliance based on FCOS, without internet access. We need to be able to build an update binary and apply it by uploading it through a web portal provided by the VM. Of course we could embed a mirrored rpm-ostree repo inside the VM, but that feels like overkill, and I'm wondering whether there is a more lightweight shortcut (i.e. build a binary update package and run some command to commit it into the other partition and boot the VM into it). I really hope there will be official tools to support the whole process; I think it is a valid use case :)
@quickwind sorry then, I thought your environment was like one of those described in #240. Can you please add some details about how you are currently performing Container Linux updates? Do you use update-engine? And locksmith? How is the payload injected into the VM?
@lucab actually we haven't implemented the upgrade yet; our product is just at version 1.0.0, but this is definitely a feature we need to work on next. Our goal is for the upgrade to cover both the OS and application levels. Previously I did a pre-study of the options with Container Linux and came across this discussion: coreroller/coreroller#5. But now that FCOS seems set to replace Container Linux, I may wait for FCOS to see how we could move our installations from Container Linux to FCOS, and what the path forward looks like. I guess it ought to be a chain of tools supporting the whole upgrade cycle: build, package, upload, apply/rollback. All of these would be file-based, with no network involved (except for the build part, which may need to sync with the official FCOS stream and other repos), and should be workable in an enterprise environment.
So, if I understood correctly this time, this pretty much sidesteps all the auto-update logic, as the node itself is airgapped.
@lucab yes, correct. The steps sound really promising; to me it looks like "docker save" -> "docker load" -> "docker rm/run" 😃 I guess the principle is similar, right? I mean, it is like "docker build" to build the different ostree commit, then "docker save" to save certain image layers, then upload the saved blob files to the air-gapped node, then "docker load" to load them into the local rpm-ostree store (like docker images), and finally "docker rm" and "docker run" to use the new image?
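As a purely conceptual aside (not an official equivalence), the analogy maps roughly like this onto ostree/rpm-ostree terms:

```bash
# Rough mapping of the Docker analogy onto ostree/rpm-ostree concepts
# (illustration only):
#
#   "docker build"  ~ composing a new ostree commit (e.g. with coreos-assembler)
#   "docker save"   ~ exporting that commit into a standalone archive-mode repo
#   "docker load"   ~ "ostree pull-local" of the commit into the host's system repo
#   "docker rm/run" ~ "rpm-ostree deploy" of the new commit, followed by a reboot
```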
I had a quick chat with @jlebon about this, and while this operation is already possible via some arcane incantation of …
@lucab, thanks for the update!
Has there been any progress on this one? I'm kind of in the same boat here. The requirement is a completely air-gapped environment, so ideally the needed files would be put on shared network storage or a USB drive and then copied to each host and installed manually. I was playing around a little with the rpm-ostree manual, and I could be wrong, but there are --download-only and --cache-only flags for the deploy command which could potentially do the trick. Would a valid approach be to use those?
What I'm struggling with is finding the actual cache location on disk, so I can test this and see how easy it would be to transfer the cache files (how many there are, etc.). Is there a better way? Any pitfalls to the above? @lucab Any input is greatly appreciated :)
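As a rough, unverified sketch of that idea (version and checksum are placeholders, /mnt/usb is an assumed mount point, and whether the downloaded data can be exported from the system repo exactly like this is something to verify):

```bash
# On a machine on the same stream that does have internet access:
# download the target deployment without applying it.
sudo rpm-ostree deploy --download-only <version>

# Export the downloaded commit from the system repo onto removable media.
sudo ostree init --repo=/mnt/usb/repo --mode=archive
sudo ostree pull-local --repo=/mnt/usb/repo /ostree/repo <commit-checksum>

# On the air-gapped host: import the commit into the system repo, then
# deploy it without touching the network.
sudo ostree pull-local --repo=/ostree/repo /mnt/usb/repo <commit-checksum>
sudo rpm-ostree deploy --cache-only <commit-checksum>
```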
@tkarls there has been some movement, but lots of parts are still missing in order to complete the flow. The key primitives that need to be completed are:
@lucab Thanks for the hints! Yes, I realized that Zincati is not just picking the latest available version and installing it... it's much more complicated to find a valid path. With some adjustment of the script found here https://github.com/openshift/cincinnati/blob/66425e6ba143bb7b7f3794331d4312fcba47c94c/docs/design/cincinnati.md#traversal I think I can manage to sort out the traversal, although not as convenient as using Zincati, of course! Thank you for the hint about the local rebase; I will try it today and see if I manage to obtain the binaries needed and perform an upgrade. I'm also starting to consider whether running a local Cincinnati server and a custom fleet_lock server for Zincati would be easier, especially in the long run... That could be deployed in the same air-gapped environment and hopefully be updated periodically (when given internet access) by "mirroring" the official releases and metadata. But without proper docs I'm a bit lost in the dark. And this is #240 anyway.
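For illustration, a rough sketch of querying the public FCOS Cincinnati graph and listing the allowed next hops from the running version; the endpoint URL, query parameters, and JSON shape follow the Cincinnati design doc linked above, but treat them as assumptions to verify:

```bash
# Fetch the update graph for one stream/architecture.
curl -s 'https://updates.coreos.fedoraproject.org/v1/graph?basearch=x86_64&stream=stable' \
  -o graph.json

# Cincinnati graphs look like { "nodes": [...], "edges": [[from, to], ...] }.
# Print the versions reachable in one hop from the currently running version.
CURRENT="38.20230625.3.0"   # placeholder for the running version
jq -r --arg cur "$CURRENT" '
  (.nodes | map(.version) | index($cur)) as $i
  | .nodes[(.edges[] | select(.[0] == $i) | .[1])].version
' graph.json
```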
There's been a lot of discussion in ostree upstream about this - most people are putting archive repos on USB keys. Endless Mobile even added a lot of code for "collections" so that machines can also update in a peer-to-peer fashion - once one machine in e.g. a computer lab has pulled the update from a USB key, other machines can pull from that one.
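A minimal sketch of the archive-repo-on-a-USB-key pattern, assuming the key is mounted at /mnt/usb and using the stock FCOS stable ref and remote URL (both assumptions worth double-checking):

```bash
# On a connected machine: create an archive-mode repo on the USB key and
# mirror the FCOS ref into it. GPG verification is disabled here only to keep
# the sketch short; a real setup should configure the Fedora keys instead.
ostree init --repo=/mnt/usb/repo --mode=archive
ostree remote add --repo=/mnt/usb/repo --no-gpg-verify fedora https://ostree.fedoraproject.org
ostree pull --repo=/mnt/usb/repo --mirror fedora fedora/x86_64/coreos/stable

# On the air-gapped hosts, the import is then an "ostree pull-local" from
# /mnt/usb/repo plus an "rpm-ostree deploy", as in the sketch above.
```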
One possibility is to do the same thing we're doing for OpenShift - export our updates as container images. Most people doing offline mirroring also need to do it for container images, and by "encapsulating" this way there's just one thing people need to understand how to mirror.
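To illustrate the container-encapsulation idea with today's tooling, one might mirror the FCOS container image the same way as any other image; the image name and the internal registry hostname below are assumptions:

```bash
# On a connected machine: copy the OS container image to a portable archive.
skopeo copy docker://quay.io/fedora/fedora-coreos:stable \
  oci-archive:/mnt/usb/fedora-coreos-stable.ociarchive

# Inside the air-gapped network: push it into the same internal registry that
# already serves application images.
skopeo copy oci-archive:/mnt/usb/fedora-coreos-stable.ociarchive \
  docker://registry.internal.example/fedora-coreos:stable
```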
We were chatting about this in IRC today and one option that was voiced was to re-use the ostree repository inside the live-ISO for that. The flow is currently a bit unfriendly but looks more or less like this:
Notes on the steps above:
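As a hedged sketch of what such a live-ISO-based flow can look like (grounded in the pull-local command mentioned later in this thread; the mount point and version are placeholders, and how the live image's root filesystem gets mounted at /mnt/rootfs is not covered):

```bash
# Assumes the live image's root filesystem is already available at /mnt/rootfs;
# extracting/mounting it depends on the image layout.

# Pull the OS commit shipped in the ISO's embedded repo into the system repo,
# keeping it associated with the stock "fedora" remote and GPG-verified.
sudo ostree pull-local --remote fedora --gpg-verify /mnt/rootfs/ostree/repo

# Deploy the version contained in that commit and reboot into it.
sudo rpm-ostree deploy <version>   # placeholder; must match the pulled commit
sudo systemctl reboot
```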
Two additional points that emerged:
Linking #812, which should make it easier to update FCOS in air-gapped environments.
#812 is done, which helps avoid needing to understand ostree and custom origins, etc. However, this issue is really strongly related to #1263, where I am arguing that we should basically stop having an upgrade graph. And once we do that, mirroring becomes dramatically simpler. In fact, I'd say mirroring is really the simplest case of coreos/fedora-coreos-docs#540, where you're not changing the OS, just its source.
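With #812 in place, the air-gapped update can be expressed as a rebase onto a container image reference. A hedged sketch, where the internal registry name is a placeholder and `ostree-unverified-registry:` is used only to keep the example short (a real deployment would want signature verification):

```bash
# Rebase the host onto the mirrored OS container image and reboot into it.
sudo rpm-ostree rebase \
  ostree-unverified-registry:registry.internal.example/fedora-coreos:stable
sudo systemctl reboot

# Later updates are then just re-pulls of the same tag:
sudo rpm-ostree upgrade
sudo systemctl reboot
```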
Is this still the recommended approach for creating "offline" installation media? If so, are there docs anywhere for how to add images to the ostree repo inside the live ISO?
How do we set up a CoreOS upgrade source in an internal network environment, given that the virtual machines cannot directly access the internet, and how can CoreOS upgrade from such an internal upgrade source? So far, the system has been upgraded from the CoreOS live OS image using the following step:
sudo ostree pull-local --remote fedora --gpg-verify /mnt/rootfs/ostree/repo
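One possible answer, sketched under the assumption that an archive-mode mirror repo (e.g. built as in the USB sketch earlier) is served over HTTP inside the internal network; the hostname is a placeholder, and editing the stock remote in place is just one option (adding a second remote and rebasing onto it would also work):

```bash
# On each CoreOS node: point the stock "fedora" ostree remote at the internal
# mirror (file path and key name assume the default FCOS remote configuration).
sudo sed -i 's|^url=.*|url=http://mirror.internal.example/repo|' \
  /etc/ostree/remotes.d/fedora.conf

# Pull and deploy whatever is newest on the mirror, then reboot.
sudo rpm-ostree upgrade
sudo systemctl reboot
```

Note that this only covers manual rpm-ostree updates; fully automatic updates via Zincati would additionally need an update-graph (Cincinnati) source, which is what #240 tracks.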
We currently use CoreOS Container Linux as a virtual appliance in the VM environments we ship to customers, and in most cases the VM doesn't have an external internet connection.
The question is, after moving to Fedora CoreOS: first, how are we going to customize it (i.e. adding/removing packages)? Hopefully there will be build tooling to facilitate the process, so that we can build it into our own CI lifecycle while still being able to sync with the public FCOS streams.
Secondly, how can we manage an upgrade in a totally offline environment? Our goal is support for an incremental upgrade package that we can land in the VM and use to initiate the upgrade when the user confirms.
Thanks!