
[GSoC] Rewrite KubevirtCI into GO #257

Open
xpivarc opened this issue Feb 6, 2024 · 15 comments
Labels
kind/enhancement lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@xpivarc
Member

xpivarc commented Feb 6, 2024

Title: Rewrite KubevirtCI into GO

Description:
KubevirtCI is a core project of KubeVirt that allows us to provision ephemeral clusters for our CI. There are two types of clusters we usually need and can provision: a VM-based cluster or a kind cluster. These differ in that a VM cluster needs pre-provisioned VM images that contain everything required to launch the Kubernetes cluster within the VM. The provisioned image is split into two parts: a base OS image, which is then enhanced with the relevant Kubernetes binaries. Each Kubernetes version therefore has its own unique image.

Goal:
The goal of this project is to allow easy provisioning of the base image and to build a specific Kubernetes version on top of it. The current KubevirtCI project can serve as inspiration, but the result of this project should be a library/framework written in the Go language. It should allow us to carry less duplication between Kubernetes versions and allow for wider contributions than a bash-based project does.

Link & resources
https://github.com/kubevirt/kubevirtci
Project size: 350 hours
Required skills: Golang, bash, scripting
Desired skills: Experience with VM provisioning tools and/or image builders
Mentor: Luboslav Pivarc [email protected], Antonio Cardace [email protected]

How and where to search help

First, check the KubeVirt documentation [1]; we cover many topics and you might already find some of the answers there. If something is unclear, feel free to open an issue and a PR. This is already a great way to get in touch with the process.
For questions related to KubeVirt and not strictly to the GSoC program, use the Slack channel [2] and the issues [3] as much as possible. Your question can be useful to other people, and the mentors may have a limited amount of time. It is also important to interact with the community as much as possible.

If something doesn't work, document the steps to reproduce the issue as clearly as possible. The more information you provide, the easier it is for us to help you. If you open an issue in KubeVirt, a template already guides you to the kind of information we generally need.

How to start

  1. Follow a guide and play with the started cluster.
  2. Try making small changes and test them against the guide.
  3. Understand the high-level design, and how provisioning and running a cluster works.

How to submit the proposal

The preferred way is to create a Google doc and share it with the mentors (Slack or email both work). If, for any reason, a Google doc doesn't work for you, please share your proposal by email. Early submissions have higher chances, as they will be reviewed over multiple iterations and can be further improved.

What the proposal should contain

The design and your strategy for solving the challenge should be concisely explained in the proposal. Which components you anticipate touching, and an example of an API/CLI, are good starting points. The proposed updates or APIs are merely a draft of what the candidate hopes to expand and change, rather than being final. The details and possible issues can be discussed with the mentors during the project, which can help refine the proposal.

It is not necessary to provide an introduction to Kubernetes or KubeVirt; instead, candidates should demonstrate their familiarity with KubeVirt by describing in detail how they intend to approach the task.

Mentors may find it helpful to have a schematic drawing of the flows and examples to better grasp the solution. They will select a couple of good proposals at the end of the selection period and this will be followed by an interview with the candidate.

The proposal can have a free form or you can get inspired by the KubeVirt design proposals [4] and template [5]. However, it should contain a draft schedule of the project phases with some planned extra time to overcome eventual difficulties.

Links

[1] https://github.com/kubevirt/kubevirt/tree/main/docs
[2] https://kubernetes.slack.com/archives/C0163DT0R8X
[3] https://github.com/kubevirt/kubevirt/issues
[4] https://github.com/kubevirt/community/tree/main/design-proposals
[5] https://github.com/kubevirt/community/blob/main/design-proposals/proposal-template.md

@aerosouund
Member

Hello there,
I'd like to express my interest in working on this as part of GSoC 2024. For starters, I've found this guide to get started with the tool: https://github.com/kubevirt/kubevirtci/blob/main/K8S.md

In the meantime, are there any other resources you recommend looking at?

@Ayush9026

Hi @xpivarc sir,

I'm also interested in the KubevirtCI rewrite project. Could you please consider me for the project?

@xpivarc
Member Author

xpivarc commented Mar 4, 2024

Hi @aerosouund and @Ayush9026,
I have updated the description. Please let me know if anything is unclear.

@aerosouund
Member

aerosouund commented Mar 5, 2024

@xpivarc
After having looked at and interacted with the code a bit, I'm trying to formalize the problem statement. The end goal is to write a Go program that does the following things:

  • check if the system running has virtualization capabilities
  • detect the container runtime running on the machine
  • start the gocli container, which in turn spins up the registry and the node containers, with memory limits based on what has been supplied via the command-line arguments
  • launch the cluster nodes based on the $KUBEVIRT_PROVIDER
  • do an exec into those launched nodes to make sure they are ready
  • check the different parameters (Istio, Rook Ceph, etc.), apply them properly if specified by the user, and wait until they are ready if necessary
  • create some functionality to update the dependencies of the project (CentOS, for example)

Please feel free to add or remove points if you see fit.
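To make the first two steps above concrete, here is a minimal Go sketch. The specific checks (looking for /dev/kvm, and for docker/podman on PATH) are illustrative assumptions about how this could be done, not existing KubevirtCI code; the remaining steps are left as comments:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// hasVirtualization reports whether /dev/kvm exists. As discussed below,
// this check could also simply be skipped, letting cluster start-up fail
// on machines without virtualization support.
func hasVirtualization() bool {
	_, err := os.Stat("/dev/kvm")
	return err == nil
}

// detectRuntime returns the first container runtime found on PATH.
// A command-line flag could override this, as suggested in the thread.
func detectRuntime() (string, error) {
	for _, rt := range []string{"docker", "podman"} {
		if _, err := exec.LookPath(rt); err == nil {
			return rt, nil
		}
	}
	return "", fmt.Errorf("no container runtime found on PATH")
}

func main() {
	fmt.Println("virtualization available:", hasVirtualization())
	rt, err := detectRuntime()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("container runtime:", rt)
	// Next steps (not implemented in this sketch): start the gocli
	// container, launch the $KUBEVIRT_PROVIDER nodes, exec into them
	// to verify readiness, and apply optional add-ons (Istio, Rook Ceph).
}
```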

@Ayush9026

Hi @xpivarc sir

Excited to contribute to rewriting KubevirtCI in Go for GSoC! With a roadmap focusing on environment setup, container orchestration, cluster provisioning, parameter handling, dependency management, testing, documentation, and maintenance, I'm confident in delivering an efficient and customizable solution.

Let me know if any further adjustments are needed!

  1. Environment Setup:

    • Ensure the system has virtualization capabilities.
    • Detect the container runtime running on the machine.
  2. Container Orchestration:

    • Start the gocli container, responsible for spinning up registry and nodes containers.
    • Customize container memory limits based on command-line arguments.
  3. Cluster Provisioning:

    • Launch cluster nodes based on the $KUBEVIRT_PROVIDER.
    • Verify node readiness by executing into the launched nodes.
  4. Parameter Handling:

    • Check specified parameters like Istio or Rook Ceph.
    • Apply configurations and ensure components are ready if specified by the user.
  5. Dependency Management:

    • Create functionality to update project dependencies, such as CentOS images.
  6. Testing and Debugging:

    • Implement thorough testing for each component.
    • Debug any issues encountered during testing to ensure stability and reliability.
  7. Documentation and Contribution:

    • Encourage contributions from the community and provide guidelines for contributing.
  8. Cleanup and Maintenance:

    • Regularly maintain and update the program to address any issues or improvements.

By following this roadmap, we can develop a Go program that effectively provisions clusters for KubeVirt, with support for customization and maintenance.

@aerosouund
Member

As it stands, a good majority of what this project does is act as a wrapper around gocli commands. Is the goal to:

  • write another wrapper over gocli that sends commands to it, extending its functionality in the areas where it didn't work before (setting up Rook, Istio, or any other functionality that was carried by the bash files)?

  • integrate gocli into the solution, where in addition to all the commands already in it, it will get extras like /cli cluster-sync and /cli runtests --$TESTARGS?

  • rewrite the functionality of gocli into a broader tool, keeping nothing from it but using it as a reference?

For example, this proposed API (or at least a similar version of it) will certainly exist in the solution. How it works will influence a lot of how the other pieces work:

kubevirtci cluster-up --slim --provider 'k8s-1.29' \
    --prometheus='true' --grafana='true' --istio='true' \
    --swapon='true' --ksm_on='true' --nodes=2 

If this is a wrapper, then there will be a --gocli_container quay.io/kubevirtci/gocli:latest option.
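For illustration, the flag surface of such a cluster-up command could be sketched with Go's standard flag package (the real tool might instead use cobra/viper; the names and defaults here just mirror the example invocation above and are not an existing API):

```go
package main

import (
	"flag"
	"fmt"
)

// Options mirrors the flags in the proposed `kubevirtci cluster-up`
// invocation; names and defaults are illustrative only.
type Options struct {
	Slim       bool
	Provider   string
	Prometheus bool
	Grafana    bool
	Istio      bool
	SwapOn     bool
	KSMOn      bool
	Nodes      int
}

// parseClusterUp parses cluster-up flags from the given argument list.
func parseClusterUp(args []string) (*Options, error) {
	o := &Options{}
	fs := flag.NewFlagSet("cluster-up", flag.ContinueOnError)
	fs.BoolVar(&o.Slim, "slim", false, "use the slim provider image")
	fs.StringVar(&o.Provider, "provider", "k8s-1.29", "cluster provider")
	fs.BoolVar(&o.Prometheus, "prometheus", false, "deploy prometheus")
	fs.BoolVar(&o.Grafana, "grafana", false, "deploy grafana")
	fs.BoolVar(&o.Istio, "istio", false, "deploy istio")
	fs.BoolVar(&o.SwapOn, "swapon", false, "enable swap on the nodes")
	fs.BoolVar(&o.KSMOn, "ksm_on", false, "enable KSM on the nodes")
	fs.IntVar(&o.Nodes, "nodes", 1, "number of nodes")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return o, nil
}

func main() {
	o, err := parseClusterUp([]string{"--slim", "--provider", "k8s-1.29", "--istio=true", "--nodes=2"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("provider=%s nodes=%d istio=%v slim=%v\n", o.Provider, o.Nodes, o.Istio, o.Slim)
}
```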

@xpivarc
Member Author

xpivarc commented Mar 11, 2024

@aerosouund

> @xpivarc After having looked at and interacted with the code a bit, I'm trying to formalize the problem statement. The end goal is to write a Go program that does the following things:
>
> * check if the system running has virtualization capabilities

General advice is to simplify when possible. Checking is nice to have, but in the end we simply fail if we don't check and the system doesn't have the capabilities.

> * detect the container runtime running on the machine

Again, nice to have, but we can have a command-line option in the beginning.

> * start the gocli container, which in turn spins up the registry and the node containers, with memory limits based on what has been supplied via the command-line arguments

+1

> * launch the cluster nodes based on the $KUBEVIRT_PROVIDER

Yes, or we can switch to a command-line option.

> * do an exec into those launched nodes to make sure they are ready

+1

> * check the different parameters (Istio, Rook Ceph, etc.), apply them properly if specified by the user, and wait until they are ready if necessary

+1

> * create some functionality to update the dependencies of the project (CentOS, for example)

Yes, but this is more of an improvement.

> Please feel free to add or remove points if you see fit.

@xpivarc
Member Author

xpivarc commented Mar 11, 2024

> As it stands, a good majority of what this project does is act as a wrapper around gocli commands. Is the goal to:
>
> * write another wrapper over gocli that sends commands to it, extending its functionality in the areas where it didn't work before (setting up Rook, Istio, or any other functionality that was carried by the bash files)?

I leave this up to you and your proposal. The goal is to remove bash and have reusable Go code.

> * integrate gocli into the solution, where in addition to all the commands already in it, it will get extras like /cli cluster-sync and /cli runtests --$TESTARGS?
>
> * rewrite the functionality of gocli into a broader tool, keeping nothing from it but using it as a reference?
>
> For example, this proposed API (or at least a similar version of it) will certainly exist in the solution. How it works will influence a lot of how the other pieces work:
>
>     kubevirtci cluster-up --slim --provider 'k8s-1.29' \
>         --prometheus='true' --grafana='true' --istio='true' \
>         --swapon='true' --ksm_on='true' --nodes=2

This looks good.

> If this is a wrapper, then there will be a --gocli_container quay.io/kubevirtci/gocli:latest option.

@aerosouund
Member

@xpivarc
Thanks for the clarifications. I agree with your points about not needing to check for virtualization and about passing the container runtime as a flag.

The way I envision this is indeed by extending gocli to include the bash functionality and removing the need for wrappers around it, meaning the end product won't include --gocli_container quay.io/kubevirtci/gocli:latest, as it's no longer going to run as a container but rather on the host itself.

I see this approach as ideal because it reduces the number of layers in the project while leveraging the established patterns in the CLI for running a container, execing into it, etc.

Some of the essential APIs to be added are the cluster-up API as explained, and:

  • a kubectl command API, where the way to interact with the cluster will be kubevirtci kubectl get nodes
    The kubectl prefix is needed because I cannot envision us implementing each kubectl verb as a viper cmd. If we manage to solve this problem, it would be more ideal to just do kubevirtci get nodes.

  • test runner API: kubevirtci runtests --$ARGS

  • cluster down API: kubevirtci cluster-down

  • cluster sync API: kubevirtci cluster-sync

  • build provider API: kubevirtci build-provider $PROVIDER. Most of this functionality already exists in gocli (the provision command). We should port all of the bash functionality into this command, or create another one that calls the provision function in the CLI.

Let me know what you think. I will try to formalize these ideas further over the coming days and add more APIs according to what I find in the codebase.
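The kubectl passthrough idea can be sketched in a few lines of Go: rather than modelling every kubectl verb as its own subcommand, forward the remaining arguments to the kubectl binary with KUBECONFIG pointing at the ephemeral cluster. The kubeconfig path and function name below are hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectlPassthrough builds a command that forwards arbitrary arguments to
// kubectl against the ephemeral cluster's kubeconfig, so that
// `kubevirtci kubectl get nodes` works without a dedicated subcommand per verb.
func kubectlPassthrough(kubeconfig string, args []string) *exec.Cmd {
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd
}

func main() {
	// Build (but don't run) the command, to show what would be executed.
	cmd := kubectlPassthrough("/tmp/kubevirtci/kubeconfig", []string{"get", "nodes"})
	fmt.Println("would run:", cmd.Args)
}
```

In the real CLI, the subcommand handler would call `cmd.Run()` and propagate its exit code, giving users the full kubectl surface for free.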

@xpivarc xpivarc changed the title DRAFT: [GSoC] Rewrite KubevirtCI into GO [GSoC] Rewrite KubevirtCI into GO Mar 12, 2024
@aerosouund
Member

@xpivarc I have sent you an initial version of the proposal on Slack; please check it and let me know if you have any suggestions.

@xpivarc
Member Author

xpivarc commented Apr 1, 2024

Reminder: don't forget to submit a proposal through GSoC by 2 April, 18:00 UTC.

@kubevirt-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 30, 2024
@xpivarc
Member Author

xpivarc commented Jul 1, 2024

/remove-lifecycle stale

@kubevirt-bot kubevirt-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 1, 2024
@kubevirt-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubevirt-bot kubevirt-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 29, 2024
@kubevirt-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

@kubevirt-bot kubevirt-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 29, 2024