Permanent errors <metadata>:<0x611> after losing cache/log #10479
-
Hello,
Scrubbing did not help. It is usually recommended to destroy the pool, create it again and restore from backup, which is something I would really prefer not to do, certainly not on a zettabyte of data. Since the corruption seems to be relatively minor, is there something I can do to restore the pool's consistency without having to recreate the whole pool? E.g. overwrite some files with corrupted metadata, or recreate just the one specific dataset which is to blame. Is there a way to tell what that is? BTW, if I do send/receive of such a pool, will the metadata corruption replicate as well?
Replies: 1 comment
-
@Harvie the checksum errors under the mirror are not related to log or cache.
Since this is in metadata, zdb is a better tool:
zdb -dd tank 0x611
should provide more details. Adding more -d options goes into further detail.
How corrupted data is handled by send is configurable. However, it likely doesn't apply to this case because much metadata is not sent. For reference,
https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZFS%20on%20Linux%20Module%20Parameters.html#zfs-send-corrupt-data
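On Linux that tunable can be inspected and changed at runtime through the module parameter interface. A rough sketch (the sysfs path is the usual location for in-kernel ZFS module parameters; check the linked docs for the exact semantics on your version):

```shell
# Check current behavior: 0 means zfs send aborts on an unreadable block
# (the default); 1 means the corrupt block is replaced with a recognizable
# pattern in the send stream instead of failing.
cat /sys/module/zfs/parameters/zfs_send_corrupt_data

# Allow send to proceed past corrupt data (requires root)
echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data
```

Again, this mostly matters for corrupt file data; metadata like the <0x611> object here is largely not part of the send stream.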
Lastly,
zpool status
command recommends destroying the pool. Such a message can occur due to othe…