Incorrect space accounting with ZFS? #16634
3 comments · 9 replies
-
For one, I know that ZFS reserves some space for itself so it can keep working if the pool runs out of space. You can search old issues about not being able to remove files after filling the pool, for example, where there was no space left to write the new transaction containing the file removal, or something like that.
-
There can be a fixed-size reserve. You can create a zpool backed by a sparse
file: just run 'truncate -s 1T disk.2' and create the zpool there. I bet you
will get a more reasonable percentage of metadata :)
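A minimal sketch of that suggestion, assuming the same layout as the original test (the pool name test2 and the path are made up):

```sh
# Back a throwaway pool with a 1 TiB sparse file instead of a 64 MiB one.
# The file only consumes blocks as they are actually written.
truncate -s 1T /home/proxmox/disk.2
zpool create test2 /home/proxmox/disk.2

# Compare the reported sizes; the fixed overheads (labels, slop reserve)
# are now a tiny fraction of the pool instead of a large one.
zpool list -p test2
zfs list -p test2

# Clean up when done.
zpool destroy test2
rm /home/proxmox/disk.2
```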
…On Fri, Oct 11, 2024, 11:00 alpha754293 ***@***.***> wrote:
Thank you.
Yeah - I read about that in the ZFS Administration Guide
<https://docs.oracle.com/cd/E19120-01/open.solaris/817-2271/gbchp/index.html>,
but a quick, cursory check via the zpool get -p all and zfs get -p all
commands shows that the current reservations are all zero.
I am also aware of the general best-practice recommendation not to fill
the pool beyond 80% (although I've filled it to ~90% before, I didn't dig
too much further into that; at 90% it's starting to REALLY push the limits),
and that temporary excursion above 80% was eliminated once I finished
shuffling the data around.
But for an empty, brand-new pool to reserve this much space for itself
(if that's what it is doing) seems quite excessive, as a percentage of
the total space.
-
You are indeed seeing a reserve, known internally as "slop". This is space that OpenZFS holds back to make sure it always has some working space. The details of the math are in comments in the code, but the tl;dr is:
- the slop is normally 1/32 of the pool size;
- but at least 128M;
- but never more than half the pool size.
So you're seeing that last case: the ~48M pool is < ~128M, so ~24M (half the pool) is reserved for the slop.
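As a rough illustration of those rules, here is the arithmetic in plain shell; this is a sketch of the logic as described above (using the default tunables of 1/32 and a 128 MiB floor), not the actual spa_get_slop_space() source:

```sh
# Slop reserve sketch, values in bytes.
pool_size=$((48 * 1024 * 1024))      # ~48M pool from this example

slop=$((pool_size / 32))             # normally 1/32 of the pool
floor=$((128 * 1024 * 1024))         # ...but at least 128 MiB,
half=$((pool_size / 2))              # ...yet never more than half the pool
[ "$floor" -gt "$half" ] && floor=$half
[ "$slop" -lt "$floor" ] && slop=$floor

echo "slop: $((slop / 1024 / 1024)) MiB"   # prints "slop: 24 MiB"
```

On a more realistically sized pool the same rules work out to 1/32 of capacity (e.g. ~32 GiB on a 1 TiB pool, about 3%), which is why a tiny 48M test pool looks so lopsided.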
-
I am trying to do a really simple experiment with ZFS where I create a file using the command
dd if=/dev/zero of=disk.1 bs=4096 count=16384
so that it would create the requisite 64 MiB minimum size for a device to be used with ZFS. So that is the correct size.
And then I create a simple test ZFS pool, test, using the command:
zpool create test /home/proxmox/disk.1
That was successful.
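For context, the SIZE and AVAIL figures discussed below would come from commands along these lines (the exact invocations aren't shown in the post, so this is an assumption):

```sh
# Pool-level view: SIZE, ALLOC and FREE as zpool accounts for them.
zpool list -p test

# Dataset-level view: USED and AVAIL, with the slop reserve already
# subtracted from AVAIL.
zfs list -p test

# Confirm that no explicit reservations are set on the dataset.
zfs get -p reservation,refreservation test
```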
Why do SIZE and AVAIL differ by almost 50%?
And why does SIZE differ from the actual size by almost 25%? (i.e. it should show 64 MiB, but instead it shows 48 MiB)
Does the metadata really take up 16 MiB on a 64 MiB file that is used as the vdev?
Based on what I can see here, I am struggling to understand why SIZE and AVAIL are so far off from where I expected them to be.
Can someone please educate me about why I am seeing the sizes that I am seeing, and how to interpret these results?
Your help in educating me is greatly appreciated.
Thank you.
Edit: This is with ZFS version 2.1.15. I've deployed it as a VM on Proxmox 7.4-19, mostly because I was able to get it running really quickly for this test. It isn't going to be used for anything other than this simple test.