Use separate containerd configs in aws-k8s and aws-dev #589
We were failing to write the containerd config in the dev image because
`pod-infra-container-image` wasn't set; it lives in the Kubernetes settings,
which the dev image doesn't have.
We don't need a `pod-infra-container-image` (or similar) setting in the dev image,
because we use containerd through Docker, which doesn't use it. So the
clearest change is to split the containerd config file, removing the irrelevant
cri-plugin settings from the dev version, including the
`pod-infra-container-image` setting.
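To make that concrete, here is a minimal sketch of the kind of cri-plugin stanza that only belongs in the k8s variant of the config. The table name, template syntax, and values are illustrative assumptions rather than the repository's actual template; the point is that `sandbox_image` is rendered from the Kubernetes `pod-infra-container-image` setting, so a template containing it can't render on an image where that setting doesn't exist.

```toml
# Illustrative sketch -- not the actual repo template.
# CRI plugin settings that belong only in the k8s variant of the containerd
# config. (The exact table name depends on the containerd version: containerd
# 1.2.x uses [plugins.cri]; later versions use
# [plugins."io.containerd.grpc.v1.cri"].)
[plugins.cri]
  # Assumed template expression: rendered from the Kubernetes
  # pod-infra-container-image setting. If that setting is absent, as in the
  # dev image, rendering the config file fails.
  sandbox_image = "{{settings.kubernetes.pod-infra-container-image}}"
```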
Testing done:
Built and launched an aws-k8s image; it connected to my cluster and pods worked. Logging in, I see the config files:

...and confirmed that `/etc/containerd/config.toml` is the k8s version with `sandbox_image` set based on `pod-infra-container-image`.

Then I built an aws-dev image, connected, and confirmed Docker containers still run OK. Logging in, I see the same two config file templates, and confirmed that `/etc/containerd/config.toml` is the new shorter version for dev:
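The rendered file contents aren't reproduced in this excerpt. As a rough illustration only (paths and values are assumptions, not the actual test output), a dev-oriented config with the cri-plugin settings stripped out could be as small as:

```toml
# Illustrative sketch only -- not the actual rendered /etc/containerd/config.toml.
# With containerd used only through Docker, no CRI plugin settings are needed,
# so the dev config can be limited to basic daemon settings.
[grpc]
  # Conventional default containerd socket path, shown here as an assumption.
  address = "/run/containerd/containerd.sock"
```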