RFE: Migrating linked ZFS clone disks: not possible #28
When I try to migrate a VM which has a "linked-clone" disk, I get the following error.

Comments
I have no idea what linked clones are. But I would say that your storage config is wrong.

proxmove tries to connect to your old-cluster "overlords" to find the VM "abtest.xxxxx.de" (vm 123?). Through the API, it has found (I presume) where the disk is supposed to live, or proxmove concatenates your base dataset (from the storage config) with the volume name. In either case: proxmove calls ssh to the "overlords" storage at 192.168.202.132. There it tries to create a temporary snapshot, so the data can be transferred. This snapshotting fails because the dataset it tries to snapshot apparently does not exist.

Something in your config is likely wrong. If you find the zfs zvol where the source disk actually lives, that would help.

For reference, this is roughly what proxmove runs:

ssh -A <user>@192.168.202.132 \
  zfs snapshot data/base-9000-disk-0/<volume>@<snapshot> &&
  zfs send -Rnv data/base-9000-disk-0/<volume>@<snapshot> &&
  zfs destroy data/base-9000-disk-0/<volume>@<snapshot>
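To see where the source disk actually lives, a quick look at the source pool helps; a minimal sketch, assuming the pool is called "data" (as in the commands above) and using a placeholder for the ssh user:

# list every dataset and snapshot under the pool together with its clone
# origin; a non-"-" ORIGIN marks a linked (ZFS) clone
ssh -A <user>@192.168.202.132 zfs list -r -t all -o name,origin data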
I'm guessing he's talking about linked ZFS clones: you prepare one VM, turn it into a template and then clone it, repeating for however many identical VMs you need. The benefit is that they all share the same source, and the clones only take up space where they differ from the source. It's handy for setting up a virtual Windows workstation, sysprepping it, then making 50 identical copies for a hacky/not-so-expensive VDI.
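For context, a minimal sketch of how such clones are typically produced on Proxmox (the VM IDs and names here are illustrative, with 9000 as the template to match the base-9000-disk-0 naming in this issue):

# convert a prepared VM into a template; on ZFS storage its disk is renamed
# from vm-9000-disk-0 to base-9000-disk-0
qm template 9000

# clone the template; without --full, Proxmox creates a linked clone, i.e. a
# ZFS clone that references the template's base image instead of copying it
qm clone 9000 123 --name abtest
qm clone 9000 124 --name abtest-2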
@darkpixel is right. About linked ZFS clones I found the following explanation in the Proxmox docs: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_copy_and_clone

From my perspective I've done the configuration right:

ZFS list:

Furthermore, I've already migrated one VM successfully with this script, which was a full clone instead of a linked clone. Could my config still be wrong?
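To illustrate the naming involved (hypothetical names, assuming the pool from the commands above is "data" and the Proxmox storage is also called "data"): on the ZFS side a linked clone is a separate dataset whose origin points at a snapshot of the base image, while the VM config references it with the base- prefix, and that prefixed name is what proxmove ends up using:

# on the source node: show the clone relationship (output lines below are
# hypothetical, shown as comments)
zfs list -r -t all -o name,origin data
#   data/base-9000-disk-0             -
#   data/base-9000-disk-0@__base__    -
#   data/vm-123-disk-0                data/base-9000-disk-0@__base__

# the VM config, however, names the volume with the base- prefix
qm config 123 | grep -E '^(scsi|virtio|ide|sata)'
#   scsi0: data:base-9000-disk-0/vm-123-disk-0,size=32G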
Ok. I guess that looks legit then :) In that case the ZFS has a base image plus a clone, and proxmove simply doesn't know about that layout. This is how proxmove collects the volumes from the VM config:

def get_volumes(self):
    if 'volumes' not in self._cache:
        volumes = {}
        for key, value in self.get_config().items():
            if PROXMOX_VOLUME_TYPES_RE.match(key):
                location, properties = value.split(',', 1)
                if location == 'none':
                    volume = ProxmoxVolume(None, properties)
                else:
                    storage, location = location.split(':', 1)
                    storage = self.cluster.get_storage(self.node, storage)
                    volume = storage.get_volume(location, properties)
                volumes[key] = volume
        self._cache['volumes'] = volumes
    return self._cache['volumes']

You could try and see what happens if you change the location in the VM config: take the disk line that still carries the base-.../ prefix and replace it with the plain volume name (without that prefix). That might work. (But then the destination will lose any notion of a linked clone.)

Some more debug output from your side also helps. Run it with debug output enabled and post what you get.
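A sketch of that experiment, with hypothetical storage and VM names (the disk line lives in /etc/pve/qemu-server/<vmid>.conf on the source node; keep a backup of that file, and assume here that proxmove builds the dataset path by joining the pool name with the volume name, as the commands earlier in this thread suggest):

# inspect the current disk references of the VM (ID 123 as an example)
grep -nE '^(scsi|virtio|ide|sata)' /etc/pve/qemu-server/123.conf

# hypothetical before/after of the disk line:
#   before: scsi0: data:base-9000-disk-0/vm-123-disk-0,size=32G
#   after:  scsi0: data:vm-123-disk-0,size=32G
# with the base- prefix removed, the storage:location split in get_volumes()
# would yield a dataset path (data/vm-123-disk-0) that actually exists, so
# the temporary snapshot has something to bite on; the disk is then migrated
# as an ordinary, stand-alone volume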
Here is the result:
So, I suspect it might work if you get that ssh -A to <user>@192.168.202.132 working, and then:

# zfs list -r -t all data/base-9000-disk-0

Find the snapshot and then:

# zfs send data/base-9000-disk-0@SNAPSHOT |
    ssh <user>@<destination> zfs recv data/base-9000-disk-0

Possibly?
Ok. This is not something I'm willing to spend time on. Nobody I know uses linked clones. I'll leave it open with a wontfix label for now.