Support migrating VMs between Proxmox nodes #270
Comments
Hi, I hope you have found the answer already. I wanted to share some information about the limitations of Kubernetes and Proxmox. Kubernetes has fields that cannot be changed after they are set. Proxmox can only migrate a VM if the disk belongs to that VM, but in Kubernetes a PV/PVC is not tied to a particular VM; that is why the disks are registered under the placeholder VM ID 9999. There is no way to rename the disk either: the name of the block disk is the ID of the PV in Kubernetes, and there is no metadata on the block device beyond that name. Lastly, Kubernetes has a feature called node drain, which moves the pods to a different node; this is the main idea of Kubernetes. PS. Please let me know why you need VM migration if you feel it is important for your situation.
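For reference, this is the standard Kubernetes-native workflow the comment above refers to: drain the node before Proxmox-side maintenance and uncordon it afterwards. Plain kubectl, nothing plugin-specific; the node name is a placeholder.

```sh
# Evict pods from the Kubernetes node that runs on the Proxmox VM you want to free up.
# --ignore-daemonsets and --delete-emptydir-data are commonly needed on real clusters.
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

# ... perform the Proxmox-side maintenance or rebalancing ...

# Allow pods to be scheduled on the node again.
kubectl uncordon worker-node-1
```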
Thank you for the explanation. I am not that experienced with which fields in PVs/PVCs can and cannot change, but I had the suspicion that this would be an issue when it comes to migrating VMs between Proxmox nodes. I'll give some background about our situation. We run some workloads that require local PVs for performance reasons, but cannot be deployed in high-availability mode. This means that those workloads cannot move from one Kubernetes node to another by simply draining the node. The live migration feature in Proxmox gives us the ability to live-migrate an entire Kubernetes node from one Proxmox node to another, including its disks. This way we are able to redistribute resources or free up a Proxmox node for maintenance without downtime for our workloads. So ideally we would indeed want to move pods and not VMs, but in practice this is sometimes not possible or causes downtime for a deployment. That is why I was asking whether there would be a possible implementation where proxmox-csi-plugin does not impact the migration feature in Proxmox. But I agree that these workloads are quite atypical and we might have a bit of an edge case here.
Feature Request
Description
While trying out the tool I noticed that VMs that have disks attached through proxmox-csi-plugin can no longer be migrated. Proxmox complains that the disks are owned by a different VM, namely the non-existent VM 9999. I searched for a way to force Proxmox to migrate the VM, but could not find anything; the only approaches I found involved a lot of manual work and downtime for the VM being migrated, which is not desirable.
Then I realized that even if Proxmox allowed the migration, it would probably break the plugin's bookkeeping for the PVs: the volumeHandle and the nodeAffinity would no longer be correct.
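To illustrate why this breaks, here is a rough sketch of what a PV backed by node-pinned Proxmox storage might look like. The driver name, volumeHandle layout, topology key, and all values are illustrative assumptions, not taken from this issue or the plugin's documentation; the point is that both the handle (which encodes the Proxmox node/storage) and the nodeAffinity would keep pointing at the old node after a Proxmox-side migration.

```yaml
# Illustrative only: every value here is a placeholder / assumption.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0fe20618-aaaa-bbbb-cccc-000000000000   # the Proxmox disk is named after this PV, under placeholder VM 9999
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: csi.proxmox.example                      # assumed driver name
    # assumed handle layout: encodes the Proxmox zone/node and storage the disk lives on
    volumeHandle: region-1/pve-node-1/local-lvm/vm-9999-pvc-0fe20618-aaaa-bbbb-cccc-000000000000
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone       # assumed topology key mapping to the Proxmox node
              operator: In
              values:
                - pve-node-1                         # would still reference the old node after a live migration
```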
So my question/feature request is: would there be any way to support migrating VMs with disks provisioned by proxmox-csi-plugin? Ideally without any downtime for the VM and/or pods, but if downtime is required, then at least in an automated fashion and with minimal downtime.
PS. Thank you for this awesome project, integrating Kubernetes with Proxmox is a great idea and the implementation works really really well.