proposal: generate "base rhel" container image, build OCP on top #498
Comments
I like the idea! It sounds like decoupling into a few classes of streams would make bootstrapping and CI testing easier to manage.
One question is how we would tie these container images together? For example, if the
IOW, the end goal here is that the lifecycle of this container is logically separate; either container can change independently without caring about the other, and we just merge the result.
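For illustration, the kind of decoupled derivation being discussed could be sketched as a container build. Note the image name, tag, and package set here are all hypothetical placeholders, not the actual pipeline:

```dockerfile
# Hypothetical "base RHEL" OS image that only tracks RHEL content;
# its lifecycle is independent of anything OpenShift-specific.
FROM quay.io/example/rhel-coreos-base:8.5

# OCP-specific layer merged on top; this half can change on its own
# schedule, and the final image is just the merge of the two.
RUN rpm-ostree install cri-o kubelet && \
    ostree container commit
```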
I think the best way to view this proposal is through our workflow for "test RHCOS with new RHEL minor version". With this flow, we produce one
This is what I was looking for 👍
That would be extremely useful for OKD, as we now have to build a full-blown image just to ship a few RPMs.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.

/lifecycle stale
/remove-lifecycle stale
/label jira
@travier: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.

/lifecycle stale
/remove-lifecycle stale

We're working toward that goal; we're just not there yet, but the ostree-ext work might get us there.
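As a rough sketch of what that could eventually look like with the ostree native container support, a node might be pointed directly at a registry-delivered OS image (the image URL below is made up for illustration):

```
# Hypothetical: rebase a node onto a container-registry-delivered OS image.
# "ostree-unverified-registry:" skips signature verification; a real
# deployment would want a verified transport.
rpm-ostree rebase ostree-unverified-registry:quay.io/example/rhel-coreos:8.5
```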
In the end this also kind of requires that we structure inputs to the base image to only come from RHEL, for example, so that there's only one version number that matters.
And a core problem with this is that in some cases - specifically e.g. the live ISO - use cases we have rely on kubelet existing there by default. That said, it may be the case that we could try to do this at the core - i.e. generate one RHCOS 8.5 build, then further specialize/derive that build for multiple OCP releases, and generate disk images out of those. If we could get away with only having
I have a variant of this in #799 that differs in important technical ways.
Now that rpm-ostree is close to supporting "live updates", one thing we could do is move crio/kubelet into a separate `machine-os-kubelet` container or so, and also move `openvswitch` as part of e.g. the SDN container. But these would still be treated as "first class" bits because they'd still be underneath the readonly bind mount in `/usr` etc. The MCO would learn to pull down this `machine-os-kubelet` container and apply updates from it too; and we can generalize that to N container images with M RPMs inside (or... perhaps not RPMs at all).

Advantages:
- The RHCOS bootimage is basically just RHEL, and this would greatly increase alignment with OKD since we'd use the same approach in both places.
- On the bootstrap node, the crio/kubelet in use become exactly the same as the ones shipped in the cluster.
- There wouldn't be a "CI -> shipping" gap for kubelet anymore - when a PR merges to that repo it'd get rebuilt and shipped the same way all other containers are, and not be versioned with RHCOS at all.
Note that this wouldn't break the concept that the cluster owns and manages OS updates at all; we'd still be testing the OS, kubelet, and cluster components all together as a unit in the end. The goal here is just to split things up more internally so we can improve the process for CI and building; for example, the RHCOS version number would (mostly) just be a RHEL version number, which would greatly increase clarity of how things work. We can be more agile with kubelet/crio etc.
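A minimal sketch of what the proposed `machine-os-kubelet` image could look like as a layered build (the base image name and build details are assumptions for illustration, not an agreed design):

```dockerfile
# Hypothetical small layered image carrying only the node agents.
# The MCO would pull this and apply its content under the readonly
# /usr bind mount, alongside (and independently of) the base OS image.
FROM quay.io/example/rhel-coreos-base:8.5
RUN rpm-ostree install kubelet cri-o && \
    ostree container commit
```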