When a clone volume exists and we delete the source volume, the daemonset log gets flooded with errors like:
E1110 11:34:19.572435 1 zfs_util.go:597] zfs: could not destroy snapshot for the clone vol zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377 snap pvc-728b13d6-9a46-467f-a35f-782abd07f377 err exit status 1
E1110 11:34:19.572475 1 volume.go:251] error syncing 'openebs/pvc-728b13d6-9a46-467f-a35f-782abd07f377': exit status 1, requeuing
I1110 11:34:49.563328 1 volume.go:136] Got update event for ZV zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
I1110 11:34:49.563379 1 zfs_util.go:592] destroying snapshot pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377 for the clone zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
E1110 11:34:49.572847 1 zfs_util.go:670] zfs: could not destroy snapshot pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377 cmd [destroy zfspv-pool/pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377] error: cannot destroy 'zfspv-pool/pvc-9c8f4b79-dc9f-4dd1-84a6-9668236ab031@pvc-728b13d6-9a46-467f-a35f-782abd07f377': snapshot has dependent clones
use '-R' to destroy the following datasets:
zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377
E1110 11:34:49.572878 1 zfs_util.go:597] zfs: could not destroy snapshot for the clone vol zfspv-pool/pvc-728b13d6-9a46-467f-a35f-782abd07f377 snap pvc-728b13d6-9a46-467f-a35f-782abd07f377 err exit status 1
E1110 11:34:49.572921 1 volume.go:251] error syncing 'openebs/pvc-728b13d6-9a46-467f-a35f-782abd07f377': exit status 1, requeuing
What is happening here is that the snapshot destroy fails because a clone volume still depends on the snapshot. The volume mgmt controller keeps retrying the delete and keeps failing with the same error until the clone volume is deleted, flooding the log with unnecessary error messages.