Replies: 4 comments 6 replies
-
I am trying to upgrade my school's open-source software mirror site, which doesn't care about write performance.
-
You didn't say anything about what your users actually demand, only that YOU want things as fast as possible. Do your users really care if they need to wait an extra minute to download some Linux ISO? You also didn't mention how much data of which type you are actually storing, how long it needs to be kept, how much free space you have right now, etc. Remember that ZFS supports different compression algorithms; depending on your kind of data and CPU power, it might be worth applying the highest feasible compression level, possibly leaving a good share of your pool free anyway.
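For example (a hedged sketch; `tank/mirror` is a hypothetical dataset name, and mostly pre-compressed content like ISOs and .xz packages may not shrink much):

```shell
# Enable zstd at a fairly high level on the dataset holding the mirror data.
zfs set compression=zstd-9 tank/mirror

# Compression only applies to newly written blocks; after data has been
# (re)written, check the achieved ratio and space usage:
zfs get compression,compressratio tank/mirror
zfs list -o name,used,avail,refer tank/mirror
```

If `compressratio` stays near 1.00x, the data is already compressed and a cheaper level (or `lz4`) would save CPU for the same result.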
If (random) reads are your main focus, you might also consider a 3-way mirror. Things heavily depend on how much storage space you actually need and how much you might benefit from compression. There are some discussions claiming a 3-way mirror performs best for reads, while others don't seem to see those benefits. Results might depend on the OS as well, but it could be worth a try for your actual setup. https://forums.freebsd.org/threads/zfs-read-performance-of-mirrored-vdevs.59879/#post-343931
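A minimal sketch of what that looks like (disk and pool names are hypothetical; a single 3-way mirror vdev lets ZFS spread independent random reads across all three disks):

```shell
# Create a pool backed by one 3-way mirror vdev:
zpool create tank mirror /dev/da0 /dev/da1 /dev/da2

# Or extend an existing 2-way mirror (da0+da1) with a third disk:
zpool attach tank /dev/da0 /dev/da2

zpool status tank
```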
You might also want to consider using a ZRAM device, which keeps part of your RAM compressed. Though I'm not sure this isn't already effectively the case for the ZFS ARC when compression is applied to the data storage. https://en.wikipedia.org/wiki/Zram
-
This post further shows that the ZFS documentation/behavior needs to tell the end user what performance changes to expect when disk space utilization reaches "high" percentages. The other option is to reserve space at volume creation time so that performance degradation isn't an issue. On the enterprise side it's a bit dangerous if a volume hits 100% because some sysop forgot to reserve space back in the day.
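One common way to set up such a safety margin (a sketch; the pool/dataset names and the 500G figure are hypothetical) is an empty dataset whose reservation keeps the pool from ever filling completely:

```shell
# Empty dataset that nobody writes to, holding back ~5% of a 10 TB pool:
zfs create tank/reserved
zfs set reservation=500G tank/reserved

# If the pool ever runs full, drop the reservation to get emergency space:
zfs set reservation=none tank/reserved
```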
-
The real problem here is permanent performance degradation once you cross the mark (96% in today's version), even after freeing up space to get back to 20% free. The on-disk structure of the file system cannot change, so it cannot be defragmented, and the algorithm that searches for free space doesn't perform well when the fs is very fragmented. The real issue is fragmentation of free space, which cannot be fixed in place. The only way to fix it is send | receive, which can be a problem in production. Documenting this is very important.
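The send | receive fix mentioned above can be sketched like this (dataset names are hypothetical; the receive side needs enough free space for a full copy, and clients should be paused for the final rename):

```shell
# Rewriting a dataset via send/receive lays the data out afresh:
zfs snapshot tank/data@rewrite
zfs send tank/data@rewrite | zfs receive tank/data_new

# After verifying the copy, swap the datasets:
zfs rename tank/data tank/data_old
zfs rename tank/data_new tank/data

# Free-space fragmentation per vdev shows up in the FRAG column:
zpool list -v tank
```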
-
There have been lots of discussions before, but I don't know whether they still apply to the newest version of ZFS.
I just care about random read performance. I really need some advice.