Fetch kni-installer binary from a release payload #401
Conversation
This PR needs openshift/origin#22619 and openshift-metal3/kni-installer#53
We now also require openshift/origin#22636
Looks like the fix we depend on is merged, via openshift/origin#22568
and the kni-installer patch is in. One more patch is needed and still open: openshift/origin#22619
This one is only needed because the
@smarterclayton look at what these guys are pulling off! You and I have gone round and round on this.
Thanks, probably best discussed in openshift-metal3/kni-installer#37 - for bare-metal IPI, I don't think the user should have to do anything to download the image before kicking off the install.
@markmc how do the build tags get set in this process? We need to build with ironic and libvirt. |
Yep, see https://github.com/openshift-metal3/kni-installer/blob/master/images/baremetal/Dockerfile.ci |
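For context, those build tags are passed to the Go toolchain when that image is built; a hedged sketch of the idea (the exact tag names and build invocation in Dockerfile.ci may differ):

```sh
# Compile the installer with the providers we need built in.
# Tag names here are assumptions; check Dockerfile.ci for the real ones.
go build -tags "libvirt baremetal" -o openshift-install ./cmd/openshift-install
```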
There's no straightforward way to get the installer pinned RHCOS version from an installer binary or a release payload, so let's not become reliant on this. We can easily pin dev-scripts to the correct RHCOS image each time we pin to a new release payload. Also, ultimately we need to get away from this way of caching a pre-defined RHCOS image version - we instead need a way of caching whatever version is driven by either openshift-install or machine config upgrades.
Now that we can build and publish release payloads with our installer fork, let's consume the installer the same way stock installer builds are intended to be consumed - by extracting it from the release payload. Note - we use extract --command=openshift-install to avoid the round trip through a tar.gz archive involved with using --tools.
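For illustration, a minimal sketch of that extraction step, assuming a custom release payload already pushed to a registry (the pullspec below is a placeholder):

```sh
#!/bin/bash
set -euo pipefail

# Placeholder pullspec for the custom release payload.
RELEASE_PAYLOAD="quay.io/example/origin-release:v4.1-custom"

# Pull the openshift-install binary straight out of the payload,
# avoiding the tar.gz round trip that --tools involves.
oc adm release extract \
  --command=openshift-install \
  --to=. \
  "$RELEASE_PAYLOAD"

./openshift-install version
```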
Use mktemp --tmpdir so we use /tmp, remove the image-references file from the tmpdir when we're done with it, move the wait_for_tag() function, and change wait_for_tag() to not wait for a specific image digest.
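As an illustration of the tempfile handling described above (filenames and the exact extract invocation are assumptions, not copied from the script):

```sh
# Create the scratch file under $TMPDIR (normally /tmp) rather than
# in the current directory.
IMAGE_REFS=$(mktemp --tmpdir image-references.XXXXXX)

# Dump the payload's image-references file into it.
oc adm release extract --file=image-references "$RELEASE_PAYLOAD" > "$IMAGE_REFS"

# ... consume $IMAGE_REFS ...

# Remove the file from the tmpdir when we're done with it.
rm -f "$IMAGE_REFS"
```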
Allow additional images to be overridden, rather than just the installer image. Also update the docs, and show how we are currently building with a custom baremetal-machine-controllers image.
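A hedged sketch of how additional images can be swapped into the payload with oc adm release new (all pullspecs here are placeholders):

```sh
# Build a custom release payload from a base one, overriding the
# installer image and, additionally, baremetal-machine-controllers.
oc adm release new \
  --from-release="$BASE_RELEASE_PAYLOAD" \
  --to-image="$CUSTOM_RELEASE_PAYLOAD" \
  installer=quay.io/example/kni-installer:latest \
  baremetal-machine-controllers=quay.io/example/baremetal-machine-controllers:latest
```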
I think this is ready to merge. There are a few improvements still listed in the checklist above, but ... well, we're relying on a custom payload now, so let's get this merged and make those improvements later if necessary.
I haven't tried the process yet, but I think it looks good. When rebasing the installer, is it expected to build the image and then test, or should we try to maintain some functionality in dev-scripts to build from git without a release payload?
I'd be open to a dev hack to make testing the installer changes easier ... but it shouldn't be the default path.
I think it's good to merge if CI passes, then. Is this the only PR that's needed?
Yeah, it should just work with this PR. Let's hold off for now - the deploy I kicked off earlier failed. Will look into it.
Build SUCCESS, see build http://10.8.144.11:8080/job/dev-tools/644/
Sorry, didn't get back to this - if CI has passed, and one person sees it working for them locally ... then I'm fine with merging.
I see that CI passed, but it's not working for me locally. master does work for me. Here's what I get:
I got it to work by upgrading my client tools. Script 01 ensures a 4.1 version of oc is installed.
I saw the same locally as well. Could we update it? Or just base it off the build date and check:

```sh
OC_DATE=$(date -d $(oc version -o json | jq -r '.clientVersion.buildDate') +%s)
[ $OC_DATE -ge 1558047075 ] || update_oc
```
Yeah, I like that idea. We know the current version is new enough, at least. Want to push that up as a follow-up PR we can merge right after this one?
OK, I've got a change based on Stephen's suggestion ready that I'm just going to push straight to the repo after merging this PR, since we discussed it here.
The changes in PR #401 require a newer version of oc. Make sure that the one that is already installed is at least as new as one validated to work correctly. Co-authored-by: Stephen Benjamin <[email protected]>
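A minimal sketch of what that follow-up check might look like (the function name is mine, and update_oc is assumed to be the existing dev-scripts helper):

```sh
# Build date of the oldest oc known to work (around 2019-05-16),
# as a Unix timestamp.
MIN_OC_DATE=1558047075

ensure_oc_new_enough() {
    local oc_date
    oc_date=$(date -d "$(oc version -o json | jq -r '.clientVersion.buildDate')" +%s)
    # update_oc is assumed to install a current 4.1 client.
    [ "$oc_date" -ge "$MIN_OC_DATE" ] || update_oc
}
```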
Let's consume the installer the same way stock installer builds are intended to be consumed - by extracting it from the release payload. That way we're closer to being ready to transition to the stock openshift-install in the future.
In order to do this, we need to produce a container image containing the installer, reference this image from a custom release payload, and drive the install (including installer extraction) from this custom payload.
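A hedged end-to-end sketch of that flow (registry locations and image names are placeholders; the Dockerfile path is the one referenced earlier in this thread):

```sh
# 1. Build and push a container image containing the installer binary.
podman build -f images/baremetal/Dockerfile.ci \
  -t quay.io/example/kni-installer:latest .
podman push quay.io/example/kni-installer:latest

# 2. Reference that image from a custom release payload.
oc adm release new \
  --from-release="$BASE_RELEASE_PAYLOAD" \
  --to-image=quay.io/example/origin-release:v4.1-custom \
  installer=quay.io/example/kni-installer:latest

# 3. Drive the install from the custom payload, including extracting
#    the installer from it.
oc adm release extract \
  --command=openshift-install \
  --to=. \
  quay.io/example/origin-release:v4.1-custom
./openshift-install create cluster
```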