Feature: Support OCI artifact pulling #43
I'm definitely interested in how to make this work with OCI artifacts. This really opens up some interesting cases. One way we could hack this without OCI artifacts is to cram the config in as a "layer" with a special media type. As an example:

```json
{
  "rootfs": {
    "diff_ids": [
      "sha256:<config digest>",
      "sha256:<rootfs digest>",
      ...
    ],
    "type": "layers"
  }
}
```

Then the media type on the "config" layer could be something like …

That said, just having the config in a normal layer that's separate from the wasm modules (with each wasm module being separate) could really have the same effect.
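For illustration, an image manifest following this hack might tag the config layer with a custom media type. This is only a sketch: the `application/vnd.example.wasm.config.v1+json` type name, digests, and sizes below are invented placeholders, not anything registered or used by runwasi.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "layers": [
    {
      "mediaType": "application/vnd.example.wasm.config.v1+json",
      "digest": "sha256:<config digest>",
      "size": 1024
    },
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<rootfs digest>",
      "size": 4096
    }
  ]
}
```

A client that understands the special media type would peel off the config layer, while a plain container runtime would still see a structurally valid image.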
@cpuguy83 yeah, it's interesting. I've referenced this in the other thread as well. Usability-wise, this would dramatically benefit function development and deployment as a "simple single immutable graph"; on the other hand, the way we currently expect a cluster operator to understand an installed application will change. There's another issue I'd love your take on: we're thinking that we're overriding the …, and I'm very interested in your thoughts on that.
Interesting. So in this case I don't like the yet-another-media-type approach, but at the same time everything else works smoothly here. This feature would add the ability to ship the same host binaries but modify the config separately, because it's a different "image". There's some collateral damage to the image …
This can be supported with runwasi v0.4.0, and #164 demonstrates it.
The scenario is this. Right now -- partly because we were trying to exercise the containerd shim, and partly because there wasn't an oras-rs crate we could use -- our app hosts (whether spin, slight, or future hosts) can't just give us a .toml: either we alter how the runtimes acquire their configs in order to integrate with k8s -- something that doesn't make a lot of sense -- or we use volume mounts for the config files, and you don't want the operational experience of the config shipping separately. Instead, you want one reference to an immutable config of a host and a module. You want the joy of using the `image` value of the pod spec to point at "the entire thing that runs correctly" -- which includes more than one artifact.

The way we do that without OCI Artifacts is by building a scratch container and dropping both the module and the config for the runtime into it. That's fine, but we need to build a container in order to not use the container. :-) This is entirely a point-in-time thing, as an older friend used to say. No immediate hurries.
BUT: we do need to have a plan to a) support both this method and the OCI Artifact method (oras), and b) understand what that would mean for the yaml experience. We EITHER re-use the `image` key, accept both, and just do the right thing (check with oras and, if that fails, use docker), OR we do a hard roll to oras and commit, re-using the `image` key for artifacts only. Alternatively, we could add an `artifact` key and have the runtime punt unless exactly one of the two was present. (The problem with that scenario is that no one would recognize the `artifact` key as schema-valid. :-( )

In any case, there is prior art we might be able to use. https://docs.rs/oras/latest/oras/struct.Client.html never got finished, but Sajay says he'd love to fund that. Also, Jacob LeGrone in the CNAB space did https://crates.io/crates/oras, so I'll reach out to see where that codebase is -- it seems to be private. But if he is willing, we can bring that up to date pretty rapidly.
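The "just do the right thing" option could amount to dispatching on the media type the registry reports for the manifest behind the `image` reference. A minimal sketch, assuming we can inspect that media type up front; the function and the wasm-config media-type string are hypothetical, not anything runwasi implements:

```rust
/// Hypothetical dispatcher: given the mediaType of the manifest behind
/// the pod-spec `image` reference, decide whether to pull it through the
/// ordinary container path ("docker") or as an OCI artifact ("oras").
fn choose_fetcher(manifest_media_type: &str) -> &'static str {
    match manifest_media_type {
        // Known container-image manifest types keep the existing path.
        "application/vnd.oci.image.manifest.v1+json"
        | "application/vnd.docker.distribution.manifest.v2+json" => "docker",
        // Anything else is treated as an OCI artifact and handed to oras.
        _ => "oras",
    }
}

fn main() {
    // A regular image manifest goes down the docker path,
    // while an unrecognized (artifact-style) type goes to oras.
    println!("{}", choose_fetcher("application/vnd.oci.image.manifest.v1+json"));
    println!("{}", choose_fetcher("application/vnd.wasm.config.v1+json"));
}
```

This keeps the single `image` key in the pod spec schema-valid while still letting artifact references flow through a different fetcher.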