Exposing kind/k3s in the integration tests so that custom Docker images can be loaded while running the tests and other, more advanced customizations become possible #44794

Open

mkonecny-atlassian opened this issue Nov 28, 2024 · 3 comments

Description

Hi,
for more complex integration tests that involve Kubernetes and Kubernetes operators (built with the Java Operator SDK), more customizable access to the underlying kind/k3s tooling would be useful.

In my case, we have a multi-module build where the operator service is built as a Docker image in one module and the end-to-end tests live in another. I am trying to load the generated manifest into the K8s cluster (provided by Quarkus Dev Services, with kind under the hood).

The Docker image I am trying to test is not present in the k8s cluster, and it is not possible to load it into the cluster because I cannot get hold of the KindContainer. The container also does not expose a way to load the image in the first place, and it is missing the labels that would make it visible to the kind command-line tooling.

I can think of other reasons people might want to customize the k8s containers in a local dev environment: adding labels, tweaking initialization parameters, loading custom Docker images into the cluster, and so on.

I've discussed this issue in chat as well; an excerpt follows (a sketch of the suggested test-resource approach is included right after it):

The container is created manually with a Testcontainers module.
Dev Services for Kubernetes Client makes no distinction based on the chosen flavor; it only provides a kubeconfig to connect to the k8s API.
You should be able to implement a DevServicesContext.ContextAware QuarkusTestResourceLifecycleManager to get the devServicesProperties containing the quarkus.kubernetes-client.* properties needed to connect to the API. See this guide for an example.
What you will be missing is the Docker kind cluster node. Unfortunately, we currently do not set the label io.x-k8s.kind.cluster on the container, which is why kind is not able to find it.
What could be done is to set this label to some value and expose that value as part of the devservices properties map.

Meanwhile, you should be able to implement a QuarkusTestResourceLifecycleManager that starts the kind container itself, returns the required kubernetes-client config, and disables Dev Services for kubernetes-client by setting quarkus.kubernetes-client.devservices.enabled to false. This way, you have full control over the kind container.
However, this will not work in dev mode.

As for having the Quarkus k8s deployment extension deploy to the cluster provided by Dev Services for kubernetes-client, I don't know whether that will work out of the box. IIRC, the k8s deployment steps use fixed build-time and runtime configuration, so it is not possible for Dev Services to supply the k8s client config.
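
For reference, here is a minimal sketch of the ContextAware test resource suggested in the excerpt. It only assumes the standard Quarkus test APIs (QuarkusTestResourceLifecycleManager, DevServicesContext) and the quarkus.kubernetes-client.* property prefix mentioned above; the class name and what is done with the captured properties are illustrative, not an existing API.

```java
import java.util.Collections;
import java.util.Map;
import java.util.stream.Collectors;

import io.quarkus.test.common.DevServicesContext;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

/**
 * Sketch of a test resource that picks up the Kubernetes Dev Service
 * connection properties so the test can talk to the same kind/k3s cluster.
 */
public class KubernetesDevServiceAccess
        implements QuarkusTestResourceLifecycleManager, DevServicesContext.ContextAware {

    private Map<String, String> kubernetesClientProperties = Collections.emptyMap();

    @Override
    public void setIntegrationTestContext(DevServicesContext context) {
        // Capture the quarkus.kubernetes-client.* properties published by the Dev Service.
        kubernetesClientProperties = context.devServicesProperties().entrySet().stream()
                .filter(e -> e.getKey().startsWith("quarkus.kubernetes-client."))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    @Override
    public Map<String, String> start() {
        // Nothing to start here; the cluster is owned by the Dev Service.
        // The captured properties could be used to build a Fabric8 client,
        // apply the operator manifests, etc. before the tests run.
        return Collections.emptyMap();
    }

    @Override
    public void stop() {
        // Nothing to clean up.
    }
}
```

The resource would be registered on the test class with @QuarkusTestResource(KubernetesDevServiceAccess.class). As noted in the excerpt, this gives access to the connection config only; without the io.x-k8s.kind.cluster label (or access to the container itself) it still cannot load a custom image.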

Implementation ideas

Possible solutions I can think of:

  1. Configure a local Docker registry alongside the existing Docker daemon and wire it up with the kind cluster (https://kind.sigs.k8s.io/docs/user/local-registry/). This only solves the Docker image problem.
  2. Expose the KindContainer to the tests and allow it to be modified (this would require e.g. a loadImage method to be exposed); see the sketch below.
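
To illustrate option 2 (and the workaround quoted in the excerpt), below is a rough sketch of a test resource that owns the kind container itself, with quarkus.kubernetes-client.devservices.enabled=false so the Dev Service stays out of the way. It assumes the com.dajudge.kindcontainer Testcontainers module (KindContainer and its getKubeconfig() accessor), Fabric8's Config.fromKubeconfig for parsing the kubeconfig, the documented quarkus.kubernetes-client.* property names, and that containerd's ctr is available inside the kind node image; the class name and the image tarball path are made up for the example.

```java
import java.util.Map;

import org.testcontainers.utility.MountableFile;

import com.dajudge.kindcontainer.KindContainer;

import io.fabric8.kubernetes.client.Config;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

/**
 * Sketch: start and fully control the kind cluster from the test itself
 * (requires quarkus.kubernetes-client.devservices.enabled=false).
 */
public class ManagedKindCluster implements QuarkusTestResourceLifecycleManager {

    // Hypothetical path: a "docker save" tarball of the operator image built by the other module.
    private static final String OPERATOR_IMAGE_TAR = "target/operator-image.tar";

    private KindContainer<?> kind;

    @Override
    public Map<String, String> start() {
        kind = new KindContainer<>();
        kind.start();

        try {
            // Load the locally built operator image into the kind node:
            // copy the tarball in and import it with containerd's ctr.
            kind.copyFileToContainer(
                    MountableFile.forHostPath(OPERATOR_IMAGE_TAR), "/tmp/operator-image.tar");
            kind.execInContainer("ctr", "--namespace", "k8s.io",
                    "images", "import", "/tmp/operator-image.tar");
        } catch (Exception e) {
            throw new RuntimeException("Failed to load the operator image into kind", e);
        }

        // Map the kubeconfig onto the kubernetes-client runtime configuration
        // (kind's kubeconfig embeds the certificate data).
        Config config = Config.fromKubeconfig(kind.getKubeconfig());
        return Map.of(
                "quarkus.kubernetes-client.api-server-url", config.getMasterUrl(),
                "quarkus.kubernetes-client.ca-cert-data", config.getCaCertData(),
                "quarkus.kubernetes-client.client-cert-data", config.getClientCertData(),
                "quarkus.kubernetes-client.client-key-data", config.getClientKeyData());
    }

    @Override
    public void stop() {
        if (kind != null) {
            kind.stop();
        }
    }
}
```

Registered via @QuarkusTestResource(ManagedKindCluster.class), this gives the tests full control over the cluster (loading images, adding labels, etc.), but, as noted in the excerpt, it does not help in dev mode.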
mkonecny-atlassian added the kind/enhancement (New feature or request) label on Nov 28, 2024


quarkus-bot bot commented Nov 28, 2024

/cc @geoand (kubernetes), @iocanel (kubernetes)

metacosm (Contributor) commented

/cc @metacosm
